
KEEP DENSE_RANK versus ROW_NUMBER – further details

I found this nice post from Laurent Schneider the other day and wanted to comment, but my comments grew into more of a digression and discussion, so I’ve blogged it here instead and put a link in the comments of Laurent’s post.

I’d always used the ROW_NUMBER method myself until I read this, and then figured I’d try the KEEP DENSE_RANK method, which works as Laurent describes. One thing that didn’t sit well with me in the post was that Laurent said “the second one should be more performant” – I prefer hard facts, so I decided to test it a bit and my results are below.

In the simple example that Laurent gave, it’s difficult to tell which is quickest, since the table in question only has a handful of rows and therefore any benchmarking is more susceptible to other influences, clouding the results. I figured I’d build a larger table and try it on that.

Before I did that though, I did get the plans from the two statements Laurent ran on the EMP table and both show the same resource costings:

Firstly, for the ROW_NUMBER method:

select ename
,      deptno
,      sal
from   (select ename
        ,      deptno
        ,      sal
        ,      row_number() over (partition by deptno order by sal desc,empno) r
        from   emp
       )
where  r=1;

Plan hash value: 3291446077                                                                                                             

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |    14 |   644 |     4  (25)| 00:00:01 |
|*  1 |  VIEW                    |      |    14 |   644 |     4  (25)| 00:00:01 |
|*  2 |   WINDOW SORT PUSHED RANK|      |    14 |   644 |     4  (25)| 00:00:01 |
|   3 |    TABLE ACCESS FULL     | EMP  |    14 |   644 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

 1 - filter("R"=1)
 2 - filter(ROW_NUMBER() OVER ( PARTITION BY "DEPTNO" ORDER BY
 INTERNAL_FUNCTION("SAL") DESC ,"EMPNO")<=1)

Note
-----
 - dynamic sampling used for this statement

Now, the KEEP DENSE_RANK method:

select max(ename) keep (dense_rank first order by sal desc,empno) ename
,      deptno
,      max(sal) sal
from   emp 
group by deptno;

Plan hash value: 15469362

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    14 |   644 |     4  (25)| 00:00:01 |
|   1 |  SORT GROUP BY     |      |    14 |   644 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| EMP  |    14 |   644 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement

So, the plans are very similar, especially in terms of having the same resource costs…which means they should be similar in terms of performance. Running them, as Laurent did, shows response times of around 1s, which, as I say, doesn’t conclusively prove which method is quickest.

OK, on to a bigger example then…

I basically created a table similar to EMP, called JEFF_EMP, added a few more columns (for later) and then put ten million rows in it, taking around 1.3GB on my system…plenty to churn through.

DROP TABLE jeff_emp PURGE
/
CREATE TABLE jeff_emp(deptno     NUMBER
                     ,ename      VARCHAR2(100)
                     ,first_name VARCHAR2(50)
                     ,initials   VARCHAR2(30)
                     ,surname    VARCHAR2(50)
                     ,sal        NUMBER
                     ,empno      NUMBER
                     )
/
INSERT INTO jeff_emp(deptno,ename,first_name,initials,surname,sal,empno)
SELECT (MOD(ROWNUM,3) + 1) * 10
,      'FIRSTNAME_'||TO_CHAR(ROWNUM)||'_INITIALS_'||TO_CHAR(ROWNUM)||'_SURNAME_'||TO_CHAR(ROWNUM)
,      'FIRSTNAME_'||TO_CHAR(ROWNUM)
,      'INITIALS_'||TO_CHAR(ROWNUM)
,      'SURNAME_'||TO_CHAR(ROWNUM)
,      ROWNUM * 100
,      ROWNUM
FROM   (SELECT LEVEL l FROM dual CONNECT BY LEVEL < 10000001)
ORDER BY l
/
COMMIT
/
EXEC dbms_stats.gather_table_stats(ownname => USER, tabname => 'JEFF_EMP', estimate_percent => 10);

Now, here is the plan for the ROW_NUMBER method:

---------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |    10M|   868M|       |   204K  (1)| 00:40:49 |
|*  1 |  VIEW                    |          |    10M|   868M|       |   204K  (1)| 00:40:49 |
|*  2 |   WINDOW SORT PUSHED RANK|          |    10M|   629M|  1533M|   204K  (1)| 00:40:49 |
|   3 |    TABLE ACCESS FULL     | JEFF_EMP |    10M|   629M|       | 46605   (1)| 00:09:20 |
---------------------------------------------------------------------------------------------

…and the results:

ENAME                                                                                                    DEPTNO        SAL
---------------------------------------------------------------------------------------------------- ---------- ----------
FIRSTNAME_9999999_INITIALS_9999999_SURNAME_9999999                                                           10  999999900
FIRSTNAME_10000000_INITIALS_10000000_SURNAME_10000000                                                        20 1000000000
FIRSTNAME_9999998_INITIALS_9999998_SURNAME_9999998                                                           30  999999800

Elapsed: 00:00:24.47

…and the KEEP DENSE_RANK method plan:

-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |     3 |   198 | 47109   (2)| 00:09:26 |
|   1 |  SORT GROUP BY     |          |     3 |   198 | 47109   (2)| 00:09:26 |
|   2 |   TABLE ACCESS FULL| JEFF_EMP |    10M|   629M| 46605   (1)| 00:09:20 |
-------------------------------------------------------------------------------

…and it’s results:

ENAME                                                                                                    DEPTNO        SAL
---------------------------------------------------------------------------------------------------- ---------- ----------
FIRSTNAME_9999999_INITIALS_9999999_SURNAME_9999999                                                           10  999999900
FIRSTNAME_10000000_INITIALS_10000000_SURNAME_10000000                                                        20 1000000000
FIRSTNAME_9999998_INITIALS_9999998_SURNAME_9999998                                                           30  999999800

Elapsed: 00:00:07.76

So, reasonably clear results, indicating that the KEEP DENSE_RANK method takes about a third of the time of the ROW_NUMBER method. You can also see from the plans that the ROW_NUMBER method spills to TEMP, whereas the KEEP DENSE_RANK doesn’t, hence the slowdown. A sketch for watching that spill live follows.
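
If you want to watch the spill happen live, querying V$SQL_WORKAREA_ACTIVE from another session while each statement runs should show the difference – a minimal sketch (the column choice is mine; I’d expect the ROW_NUMBER version to show a window sort with a temp segment, whilst the KEEP DENSE_RANK stays in memory):

-- Show active workareas and any TEMP segment they are using
SELECT operation_type
,      ROUND(actual_mem_used/1024/1024) mem_mb
,      ROUND(tempseg_size/1024/1024) temp_mb
FROM   v$sql_workarea_active
/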

So, Laurent was correct in his assertion that it should be more performant…but it’s nice to see the results based on a more meaningful set of data.

Now, there was one other thing that concerned me, and that was whether adding more columns into the SQL would change the performance of either method to any significant degree, so I started using the extra name columns like this:

SELECT ename
,      first_name
,      initials
,      surname
,      deptno
,      sal
FROM   (SELECT ename
        ,      first_name
        ,      initials
        ,      surname
        ,      deptno
        ,      sal
        ,      ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY sal DESC,empno) r 
        FROM   jeff_emp
       )
WHERE  r = 1
/

…which has a plan of:

---------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |    10M|  1546M|       |   307K  (1)| 01:01:36 |
|*  1 |  VIEW                    |          |    10M|  1546M|       |   307K  (1)| 01:01:36 |
|*  2 |   WINDOW SORT PUSHED RANK|          |    10M|  1107M|  2606M|   307K  (1)| 01:01:36 |
|   3 |    TABLE ACCESS FULL     | JEFF_EMP |    10M|  1107M|       | 46605   (1)| 00:09:20 |
---------------------------------------------------------------------------------------------

…and results:

ENAME                                                                                                FIRST_NAME                                         INITIALS                       SURNAME                      DEPTNO         SAL
---------------------------------------------------------------------------------------------------- -------------------------------------------------- ------------------------------ -------------------------------------------------- ---------- ----------
FIRSTNAME_9999999_INITIALS_9999999_SURNAME_9999999                                                   FIRSTNAME_9999999                                  INITIALS_9999999               SURNAME_9999999          10  999999900
FIRSTNAME_10000000_INITIALS_10000000_SURNAME_10000000                                                FIRSTNAME_10000000                                 INITIALS_10000000              SURNAME_10000000         20 1000000000
FIRSTNAME_9999998_INITIALS_9999998_SURNAME_9999998                                                   FIRSTNAME_9999998                                  INITIALS_9999998               SURNAME_9999998          30  999999800

Elapsed: 00:00:25.76

For the KEEP DENSE_RANK I get:

SELECT MAX(ename) KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) ename
,      MAX(first_name) KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) first_name
,      MAX(initials) KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) initials
,      MAX(surname) KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) surname
,      deptno
,      MAX(sal) sal
FROM   jeff_emp 
GROUP BY deptno
/

Which has the following plan:

-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |     3 |   348 | 47109   (2)| 00:09:26 |
|   1 |  SORT GROUP BY     |          |     3 |   348 | 47109   (2)| 00:09:26 |
|   2 |   TABLE ACCESS FULL| JEFF_EMP |    10M|  1107M| 46605   (1)| 00:09:20 |
-------------------------------------------------------------------------------

…and results:

ENAME                                                                                                FIRST_NAME                                         INITIALS                       SURNAME                      DEPTNO         SAL
---------------------------------------------------------------------------------------------------- -------------------------------------------------- ------------------------------ -------------------------------------------------- ---------- ----------
FIRSTNAME_9999999_INITIALS_9999999_SURNAME_9999999                                                   FIRSTNAME_9999999                                  INITIALS_9999999               SURNAME_9999999          10  999999900
FIRSTNAME_10000000_INITIALS_10000000_SURNAME_10000000                                                FIRSTNAME_10000000                                 INITIALS_10000000              SURNAME_10000000         20 1000000000
FIRSTNAME_9999998_INITIALS_9999998_SURNAME_9999998                                                   FIRSTNAME_9999998                                  INITIALS_9999998               SURNAME_9999998          30  999999800

Elapsed: 00:00:14.56

So, the differential in performance has reduced significantly, with the KEEP DENSE_RANK taking around double its original time, whilst the ROW_NUMBER method has only increased marginally. I’ve not tested adding further columns, but I’m guessing (I know…I could and should test it! – a sketch of such a test follows) that it will get worse, to the extent that, eventually, the KEEP DENSE_RANK will become the worse performer. If that’s the case, then essentially these two methods have different scalability dynamics, and one should bear this in mind when considering which to choose, depending on how many DENSE_RANK’d columns you need to deliver your results.
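
By way of illustration, here’s an untested sketch of the next step in that test – piling more KEEP aggregates onto the same query (reusing the existing columns, since JEFF_EMP has no others) to see whether each extra one adds measurably to the SORT GROUP BY work:

-- Untested: each additional KEEP (DENSE_RANK FIRST...) aggregate is another
-- ranked evaluation per group, so timing this against the earlier versions
-- would show how the method scales with the number of columns.
SELECT MAX(ename)      KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) ename
,      MAX(first_name) KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) first_name
,      MAX(initials)   KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) initials
,      MAX(surname)    KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) surname
,      MIN(first_name) KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) first_name2
,      MIN(initials)   KEEP (DENSE_RANK FIRST ORDER BY sal DESC,empno) initials2
,      deptno
,      MAX(sal) sal
FROM   jeff_emp
GROUP BY deptno
/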

Hope this helps.

RANK v DENSE_RANK v ROW_NUMBER

I was asked a question by one of my users about the difference between the RANK and ROW_NUMBER analytics yesterday, so here is a post on it…

RANK, ROW_NUMBER and also DENSE_RANK are very useful for taking a set of rows and ordering them in a defined manner, whilst giving each row a “position value”. They differ in how they assign that position value. In some circumstances they may all give the same values; depending on the data, however, they may differ.

An example based on the SCOTT.EMP table helps to illustrate…

SELECT empno
,      sal
,      RANK() OVER(ORDER BY sal) rank_position
,      DENSE_RANK() OVER(ORDER BY sal) dense_rank_position
,      ROW_NUMBER() OVER(ORDER BY sal) row_number_position
FROM   emp
/

which returns, on my 11gR1 database…

EMPNO        SAL RANK_POSITION DENSE_RANK_POSITION ROW_NUMBER_POSITION
---------- ---------- ------------- ------------------- -------------------
7369        800             1                   1                   1
7900        950             2                   2                   2
7876       1100             3                   3                   3
7521       1250             4                   4                   4
7654       1250             4                   4                   5
7934       1300             6                   5                   6
7844       1500             7                   6                   7
7499       1600             8                   7                   8
7782       2450             9                   8                   9
7698       2850            10                   9                  10
7566       2975            11                  10                  11
7788       3000            12                  11                  12
7902       3000            12                  11                  13
7839       5000            14                  12                  14
14 rows selected.

Notice that RANK has given the two employees with SAL = 1250 the same position value of 4, and the two employees with SAL = 3000 the same position value of 12. Notice also that RANK skips position values 5 and 13, as it has two entries at 4 and 12 respectively. RANK uses all the numbers between 1 and 14, except 5 and 13. RANK has both repeats and gaps in its ordering.

DENSE_RANK is similar to RANK, in that it gives the two employees with SAL = 1250 the same position value of 4, but it does not skip over position value 5 – it simply carries on at 5 for the next value. DENSE_RANK uses, for the position values, all the numbers between 1 and 12, without leaving any out, and uses 4 and 11 twice. DENSE_RANK has no gaps in its ordering, only repeats.

ROW_NUMBER gives each row a unique position value and consequently uses all the numbers between 1 and 14. ROW_NUMBER has no gaps or repeats in its ordering. Note that the position value from ROW_NUMBER is not deterministic here, since the ORDER BY clause only contains SAL. If you want to ensure the order is the same each time, you need to add further columns to the ORDER BY clause to break the ties.
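
As a minimal illustration, adding EMPNO as a tie-breaker makes the numbering repeatable from run to run:

-- EMPNO is unique, so the full ORDER BY is now deterministic
SELECT empno
,      sal
,      ROW_NUMBER() OVER(ORDER BY sal, empno) row_number_position
FROM   emp
/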

No pruning for MIN/MAX of partition key column

Recently, I wanted to work out the maximum value of a column on a partitioned table. The column I wanted the maximum value for happened to be the (single and only) partition key column. The table in question was range partitioned on this single key column into monthly partitions for 2009, with data in all the partitions up to the current date, i.e. January through mid June were populated. There were no indexes on the table.

NOTE – I tried this on 10.2.0.4 (AIX) and 11.1.0 (Fedora 11) – the example below is from 11.1.0.

I’ll recreate the scenario here:

CREATE TABLESPACE tsp1
datafile '/u01/app/oracle/oradata/T111/tsp1.dbf' size 100M 
autoextend off extent management local  uniform size 1m segment space management auto online
/
CREATE TABLESPACE tsp2
datafile '/u01/app/oracle/oradata/T111/tsp2.dbf' size 100M 
autoextend off extent management local  uniform size 1m segment space management auto online
/

DROP TABLE test PURGE
/
CREATE TABLE test(col_date_part_key DATE NOT NULL
,col2 VARCHAR2(2000) NOT NULL
)
PARTITION BY RANGE(col_date_part_key)
(PARTITION month_01 VALUES LESS THAN (TO_DATE('01-FEB-2009','DD-MON-YYYY')) TABLESPACE tsp1
,PARTITION month_02 VALUES LESS THAN (TO_DATE('01-MAR-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_03 VALUES LESS THAN (TO_DATE('01-APR-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_04 VALUES LESS THAN (TO_DATE('01-MAY-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_05 VALUES LESS THAN (TO_DATE('01-JUN-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_06 VALUES LESS THAN (TO_DATE('01-JUL-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_07 VALUES LESS THAN (TO_DATE('01-AUG-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_08 VALUES LESS THAN (TO_DATE('01-SEP-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_09 VALUES LESS THAN (TO_DATE('01-OCT-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_10 VALUES LESS THAN (TO_DATE('01-NOV-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_11 VALUES LESS THAN (TO_DATE('01-DEC-2009','DD-MON-YYYY')) TABLESPACE tsp2
,PARTITION month_12 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY')) TABLESPACE tsp2
)
/
REM Insert rows, but only up to 14-JUN-2009
INSERT INTO test(col_date_part_key,col2)
SELECT TO_DATE('31-DEC-2008','DD-MON-YYYY') + l
,      LPAD('X',2000,'X')
FROM (SELECT level l FROM dual CONNECT BY level < 166)
/
COMMIT
/
SELECT COUNT(*)
FROM test
/
SELECT MIN(col_date_part_key) min_date
, MAX(col_date_part_key) max_date
FROM test
/

This runs and gives the following output:

DROP TABLE test PURGE
           *
ERROR at line 1:
ORA-00942: table or view does not exist

DROP TABLESPACE tsp1 INCLUDING CONTENTS
*
ERROR at line 1:
ORA-00959: tablespace 'TSP1' does not exist

DROP TABLESPACE tsp2 INCLUDING CONTENTS
*
ERROR at line 1:
ORA-00959: tablespace 'TSP2' does not exist

Tablespace created.

Tablespace created.

Table created.

165 rows created.

Commit complete.

COUNT(*)
----------
165

MIN_DATE  MAX_DATE
--------- ---------
01-JAN-09 14-JUN-09

Now, let's see what the plan looks like from AUTOTRACE when we run the following query to get the maximum value of COL_DATE_PART_KEY:

SQL> SET AUTOTRACE ON
SQL> SELECT MAX(col_date_part_key) min_date
  2  FROM   test                           
  3  /

MIN_DATE
---------
14-JUN-09

Execution Plan
----------------------------------------------------------
Plan hash value: 784602781

---------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     9 |    99   (0)| 00:00:02 |       |       |
|   1 |  SORT AGGREGATE      |      |     1 |     9 |            |          |       |       |
|   2 |   PARTITION RANGE ALL|      |   132 |  1188 |    99   (0)| 00:00:02 |     1 |    12 |
|   3 |    TABLE ACCESS FULL | TEST |   132 |  1188 |    99   (0)| 00:00:02 |     1 |    12 |
---------------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        320  consistent gets
         51  physical reads
          0  redo size
        527  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> SET AUTOTRACE OFF

It shows a full scan of all twelve partitions. I had figured that the plan for such a query would show a full table scan of all partitions for that table – because, in theory, if all but the first partition were empty, then the whole table would have to be scanned to answer the query – and Oracle wouldn't know at plan creation time whether the data met this case, so it would have to do the full table scan to ensure the correct result.

What I thought might happen, though, is that in executing the query, it would be able to short circuit things by working through the partitions in order, from latest to earliest, and finding the first non null value. Once it found that value, it would know not to continue looking in the earlier partitions, since the value of COL_DATE_PART_KEY couldn't possibly be greater than the non null value already identified.

It doesn’t appear to have this capability, which we can check by taking one of the partitions offline and then rerunning the query, whereupon it complains that not all the data is present…

SQL> ALTER TABLESPACE tsp1 OFFLINE;

Tablespace altered.

SQL> SET AUTOTRACE ON
SQL> SELECT MAX(col_date_part_key) min_date
  2  FROM   test
  3  /
SELECT MAX(col_date_part_key) min_date
*
ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: '/u01/app/oracle/oradata/T111/tsp1.dbf'

SQL> SET AUTOTRACE OFF

So, even though we know we could actually answer this question accurately, Oracle can’t do it as it wants to scan, unnecessarily, the whole table.

I did find a thread on OTN where somebody had asked about this, but all the responses were about workarounds, rather than explaining why this happens (bug/feature) or how it can be made to work in the way I, or the poster of that thread, think it perhaps should. One such workaround is sketched below.
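
For completeness, here's a sketch of the kind of workaround suggested there – probing the partitions individually, latest first, so that each query prunes to a single small segment (partition names as per the test script above):

-- Probe the latest partition first; if it returns NULL, fall back to
-- month_05, then month_04, and so on - each probe scans one partition only.
SELECT MAX(col_date_part_key) max_date
FROM   test PARTITION (month_06)
/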

Can anyone else shed any light on this? If it's a feature, then it seems like something that could easily be coded more efficiently by Oracle. The same issue affects both MIN and MAX, since both could be approached in the same manner.

Scripts for loading up DBGEN TPC-H data

If you’re interested in creating a TPC-H schema for testing purposes, then the following scripts may be of use to you:

Unix Scripts:
Multi Load TPCH
Load TPCH Stream

SQL*Loader Control files:
REGION Table
NATION Table
SUPPLIER Table
CUSTOMER Table
PART Table
PARTSUPP Table
ORDERS Table
LINEITEM Table

You may wish to read and check them before you use them – they’re not exactly rocket science but they seem to do the job.

I have all the files in the same directory for simplicity's sake.

I then use them to create a scale factor 1 target TPC-H schema using the following calls:

# "l" loads the REGION and NATION tables
./multi_load_tpch.sh 1 l "tpch/tpch@test" 1

# "s" loads the SUPPLIER table
./multi_load_tpch.sh 1 s "tpch/tpch@test" 10

# "c" loads the CUSTOMER table
./multi_load_tpch.sh 1 c "tpch/tpch@test" 10

# "p" loads the PART and PARTSUPP tables
./multi_load_tpch.sh 1 p "tpch/tpch@test" 10

# "o" loads the ORDERS and LINEITEM tables
./multi_load_tpch.sh 1 o "tpch/tpch@test" 10

Obviously, you need to change the connection string to match your environment.

Caveats:

  1. Obviously, they are supplied as is – use at your own discretion and risk.
  2. You need to have created the target schema tables and made sure they are empty, as the SQL*Loader control files use APPEND (see the sketch after this list).
  3. Bear in mind that choosing too high a number of parallel streams (the last parameter in the calls) will overload your machine, so try to balance it against the available system resources.
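
As a minimal sketch of caveat 2 for a standard TPC-H schema (table names as per the TPC-H specification – adjust to your own schema):

-- Clear down the target tables before loading, since the control files
-- APPEND rather than replace.
TRUNCATE TABLE lineitem;
TRUNCATE TABLE orders;
TRUNCATE TABLE partsupp;
TRUNCATE TABLE part;
TRUNCATE TABLE customer;
TRUNCATE TABLE supplier;
TRUNCATE TABLE nation;
TRUNCATE TABLE region;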

Bugs, issues or questions, please get in touch…enjoy.

Creating a TPC-H schema with DBGEN on HP-UX

I wanted to try out the HammerOra product from Steve Shaw, both at work and on my box at home…but after playing with it at home, I realised that it takes quite some time to build even a small (scale factor 1) TPC-H schema. I know it runs serially, but I'm still not quite sure why it's that slow (on my system, that is). Steve does say it can take a while, and that you might wish to consider using the TPC utility DBGEN to generate and load the schema more quickly…particularly if you use some manual parallelisation.

Given that I also need to use this tool to help with some benchmarking at work, I decided to try to get DBGEN to run on a HP-UX box today and had one or two problems which I managed to sort out. The machine in question is an RP8420 running HP-UX B.11.11.

DBGEN is a utility that allows you to create a series of flat files which contain the data for a TPC-H schema. You can then use SQL*Loader to load these into appropriately constructed tables in an Oracle database – any database actually…but I only care about Oracle of course 😉

The utility can be called with various parameters including making the target datasets in smaller “child” files which can be created in a manually parallelised fashion to speed the whole process up. You have to download the DBGEN reference data set from the TPC website (lower right hand side).

This reference data set contains the ANSI C source code which makes the DBGEN executable (and QGEN also…but more on that another day)…unfortunately it’s just the source code, so that means you need to compile it yourself…which of course leads to the first problem…that I know diddly squat about C…yeah I know, not very manly! Luckily I can sometimes follow instructions (which come with the reference data set)…

1. Copy makefile.suite to makefile
2. Edit makefile and make the following amendments (the lines to change are the CC, DATABASE, MACHINE and WORKLOAD settings):

 

################
## CHANGE NAME OF ANSI COMPILER HERE
################
CC = gcc
# Current values for DATABASE are: INFORMIX, DB2, TDAT (Teradata)
# SQLSERVER, SYBASE
# Current values for MACHINE are: ATT, DOS, HP, IBM, ICL, MVS,
# SGI, SUN, U2200, VMS, LINUX, WIN32
# Current values for WORKLOAD are: TPCH
DATABASE= ORACLE
MACHINE = HP
WORKLOAD = TPCH

That’s it for the makefile.

Now, as I mentioned, the ORACLE database is not a listed database variant in the DBGEN C code – it’s got all the other popular RDBMS which I find quite bizarre…I’m sure there’s a reason, but I can’t think of one. To get around this, as per Chapter 5 in “Pro Oracle Database 10g RAC on Linux” by Steve Shaw and Julian Dyke, I added a section to the tpcd.h for the ORACLE database:

 
#ifdef ORACLE
#define GEN_QUERY_PLAN ""
#define START_TRAN ""
#define END_TRAN ""
#define SET_OUTPUT ""
#define SET_ROWCOUNT ""
#define SET_DBASE ""
#endif

I thought that was it – but it still wouldn't compile, giving the error:

config.h:213:2: #error Support for a 64-bit datatype is required in this release

Looking at the config.h – and bearing in mind I'm no C programmer – it struck me as odd that all bar the HP machine section had stuff about DSS_HUGE and 64 bits…so I took a punt and copied some lines from the IBM section (the DSS_HUGE, HUGE_FORMAT, HUGE_DATE_FORMAT, RNG_A and RNG_C defines below) into the HP one to see if it worked…and it did. The HP section now looks like this:

 

#ifdef HP
#define _INCLUDE_POSIX_SOURCE
#define STDLIB_HAS_GETOPT
#define DSS_HUGE long long
#define HUGE_FORMAT "%lld"
#define HUGE_DATE_FORMAT "%02lld"
#define RNG_A 6364136223846793005ull
#define RNG_C 1ull
#endif /* HP */

Typing make at the command prompt then compiles the code and produces the dbgen executable…which I then spent a few hours playing with to create a scale factor 1 TPC-H set of files.

 

My next problem was one of my own making, really, in that I copied the CREATE TABLE statements for the TPC-H target tables from HammerOra's TCL script for TPC-H creation. Unfortunately, the column ordering in those DDL statements differs slightly from that of the DBGEN output files…which meant that I created the tables OK, but since I'd copied the column ordering into the SQL*Loader control files, I got errors when I tried to load some of the files, as the column order differed in one or two cases.

 

I then downloaded the TPC-H specification document, which contains, amongst other things, the data model. I cross checked the column ordering of the data model against the columns in the output files and then managed to rerun the load without any further issues.

 

Tomorrow I'll try running HammerOra against the target TPC-H schema and make some shell scripts to try to automate most of the process, so we can build schemas at different scale factors, and do so in a manually parallelised fashion – scripts the amiable Scotsman created for his parallel testing a while back should give me a good start with that.

 

Tracking TEMP usage throughout the day

I really wanted to do this via AWR, but I’ve not been able to find out if this kind of information is stored and, if it is, how I’d get access to it…maybe someone else knows…hint hint?!

I have various queries I run whilst I’m online and processes are running, but what I really wanted, was to know the usage profile throughout the day…so, given that I couldn’t find an AWR way of tracking our use of TEMP on a database, I figured I’d use a more klunky method…

DROP TABLE temp_use_history PURGE
/

CREATE TABLE temp_use_history(snap_date      DATE         NOT NULL
                             ,sid            NUMBER       NOT NULL
                             ,segtype        VARCHAR2(9)  NOT NULL
                             ,qcsid          NUMBER       NULL
                             ,username       VARCHAR2(30) NULL
                             ,osuser         VARCHAR2(30) NULL
                             ,contents       VARCHAR2(9)  NULL
                             ,sqlhash        NUMBER       NULL
                             ,sql_id         VARCHAR2(13) NULL
                             ,blocks         NUMBER       NULL
                             )
PCTFREE 0
COMPRESS
NOLOGGING
/
CREATE UNIQUE INDEX tuh_pk ON
temp_use_history(snap_date,sid,segtype)
PCTFREE 0
COMPRESS
NOLOGGING
/
ALTER TABLE temp_use_history ADD CONSTRAINT tuh_pk
PRIMARY KEY(snap_date,sid,segtype)
USING INDEX
USING INDEX
/

DECLARE
  l_program_action VARCHAR2(2000);
  l_27477 EXCEPTION;
  PRAGMA EXCEPTION_INIT(l_27477,-27477); BEGIN
  BEGIN
    DBMS_SCHEDULER.CREATE_SCHEDULE(
      schedule_name   => 'MINUTELY_5M'
     ,start_date      => SYSDATE
     ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=5'
     ,comments        =>'Daily schedule to run a job every five minutes.'
                                  );
  EXCEPTION
    WHEN l_27477 THEN
      NULL;  -- Ignore if the schedule exists
  END;

  l_program_action :=                   'DECLARE';
  l_program_action := l_program_action||'  l_date DATE := SYSDATE; ';
  l_program_action := l_program_action||'BEGIN';
  l_program_action := l_program_action||'  INSERT /*+ APPEND */ INTO temp_use_history(snap_date,sid,qcsid,username,osuser,contents,segtype,sqlhash,sql_id,blocks)';
  l_program_action := l_program_action||'  SELECT l_date,s.sid,ps.qcsid,s.username,s.osuser,su.contents,su.segtype,su.sqlhash,su.sql_id,sum(su.blocks)';
  l_program_action := l_program_action||'  FROM   v$sort_usage su';
  l_program_action := l_program_action||'  ,      v$session s';
  l_program_action := l_program_action||'  ,      v$px_session ps';
  l_program_action := l_program_action||'  WHERE  s.sid=ps.sid(+)';
  l_program_action := l_program_action||'  AND    s.saddr = su.session_addr';
  l_program_action := l_program_action||'  AND    s.serial# = su.session_num';
  l_program_action := l_program_action||'  GROUP BY s.sid,ps.qcsid,s.username,s.osuser,su.contents,su.segtype,su.sqladdr,su.sqlhash,su.sql_id;';
  l_program_action := l_program_action||'  COMMIT; ';
  l_program_action := l_program_action||'END;';

  BEGIN
    DBMS_SCHEDULER.CREATE_PROGRAM(
      program_name => 'SNAP_TEMP_USAGE'
     ,program_type   => 'PLSQL_BLOCK'
     ,program_action => l_program_action
     ,enabled        => TRUE
     ,comments       => 'Program to snap the temp usage into TEMP_USE_HISTORY'
                                 );
  EXCEPTION
    WHEN l_27477 THEN
      NULL;  -- Ignore if the program exists
  END;

  BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name      => 'JOB_SNAP_TEMP_USAGE'
     ,program_name  => 'SNAP_TEMP_USAGE'
     ,schedule_name => 'MINUTELY_5M'
     ,enabled       => TRUE
     ,auto_drop     => FALSE
     ,comments      => 'Job to snap the temp usage into TEMP_USE_HISTORY'
                             );
  EXCEPTION
    WHEN l_27477 THEN
      NULL;  -- Ignore if the job exists
  END;
END;
/

I can now run queries against the TEMP_USE_HISTORY table to show how much TEMP has been used, when, by whom and for what use, e.g.

SQL> ed
Wrote file afiedt.buf

  1  select snap_date,round(sum(blocks)*32/1024/1024) gb
  2  from temp_use_history
  3  where snap_date > sysdate-1
  4  group by snap_date
  5  having round(sum(blocks)*32/1024/1024) > 50
  6* order by 1
SQL> /

SNAP_DATE                    GB
-------------------- ----------
26-FEB-2008 16:12:25         57
26-FEB-2008 16:17:25         65
26-FEB-2008 16:22:25         74
26-FEB-2008 16:27:25         86
26-FEB-2008 16:32:25         95
26-FEB-2008 16:37:25        107
26-FEB-2008 16:42:25        121
26-FEB-2008 16:47:25        127
26-FEB-2008 16:52:25        147
26-FEB-2008 16:57:25        160
26-FEB-2008 17:02:25        162
26-FEB-2008 17:07:25        179
26-FEB-2008 17:12:25        196
26-FEB-2008 17:17:25        208
26-FEB-2008 17:22:25        217
26-FEB-2008 17:27:25        233
26-FEB-2008 17:32:25        241
26-FEB-2008 17:37:25        251
26-FEB-2008 17:42:25        257
26-FEB-2008 17:47:25        262
26-FEB-2008 17:52:25        264
26-FEB-2008 17:57:25        267
26-FEB-2008 18:02:25        201
27-FEB-2008 00:27:25         59
27-FEB-2008 00:32:25         60
27-FEB-2008 00:37:25         69
27-FEB-2008 01:12:25         57
27-FEB-2008 02:22:25         53
27-FEB-2008 09:57:25         51

29 rows selected.

From the above, I can see that, during the last twenty four hours on the database in question, there was reasonably heavy use of the TEMP area between 4pm and 6pm yesterday, with usage peaking at approximately 267GB.
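
Since the snapshot table also captures who was using the space and for what, a natural follow-up (a sketch along the same lines, again assuming the 32KB TEMP block size used in the query above) is to drill into that busy period by user and workarea type:

-- Who was using TEMP, and for what, during the 4pm-6pm peak
SELECT username
,      segtype
,      ROUND(SUM(blocks)*32/1024/1024) gb
FROM   temp_use_history
WHERE  snap_date BETWEEN TO_DATE('26-FEB-2008 16:00','DD-MON-YYYY HH24:MI')
                     AND TO_DATE('26-FEB-2008 18:00','DD-MON-YYYY HH24:MI')
GROUP BY username
,        segtype
ORDER BY gb DESC
/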

Getting multi row text items from Designer repository

I’ve mentioned Lucas Jellema before when talking about Oracle Designer – he’s helped me out again today via this post on the AMIS blog.

What I wanted, was to be able to extract, using a SQL query against the designer repository, the “Description” attribute from our Tables so that I could create table comments in the database which had the description from the table in Designer.

Extracting scalar attributes is simple enough, however, description is a multi row text property so it needed a bit more thought…and rather than reinvent the wheel, I had a look at Lucas’ stuff and sure enough found the post above. It was almost what I wanted, only I had to change it to look for table stuff rather than entities and attributes.

I used the same TYPE and sum_string_table function as Lucas, so if you're trying to use this stuff you'll probably need to read Lucas' article first. A rough sketch of the sort of thing involved is below.
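
In case that article moves, here's a rough reconstruction of the kind of TYPE and function involved – the names match those in the query below, but the bodies are my guess at the approach, not Lucas' actual code:

-- Hypothetical sketch: a collection type plus a function that concatenates
-- the collection's elements back into a single string.
CREATE OR REPLACE TYPE string_table AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION sum_string_table(p_strings IN string_table)
  RETURN VARCHAR2
IS
  l_result VARCHAR2(32767);
BEGIN
  FOR i IN 1 .. p_strings.COUNT
  LOOP
    l_result := l_result||p_strings(i)||' ';
  END LOOP;
  RETURN RTRIM(l_result);
END sum_string_table;
/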

The query I ended up with is below…it’s been “sanitized” somewhat, but I’m sure you’d get the picture, if retrieving stuff out of the designer repository is a requirement of yours.

WITH dsc AS
(
SELECT txt.txt_text
,      txt.txt_ref
FROM   cdi_text txt
WHERE  txt.txt_type = 'CDIDSC'
)
, add_cast AS
(
SELECT appsys.name application_system_name
,      b.name table_name
,      b.alias
,      CAST(COLLECT(dsc.txt_text) AS string_table) tab_description
FROM   dsc
,      designer.ci_application_systems appsys
,      designer.ci_app_sys_tables a
,      designer.ci_table_definitions b
WHERE  dsc.txt_ref = a.table_reference
AND    b.irid = a.table_reference
AND    a.parent_ivid = appsys.ivid
GROUP BY appsys.name 
,        b.name
,        b.alias
)
SELECT application_system_name
,      table_name
,      alias
,      sum_string_table(tab_description)
FROM   add_cast
WHERE  application_system_name = 'MY_APP_SYS'
and    table_name = 'MY_TABLE_NAME'
/

Thanks Lucas!

On another note, regular visitors may realise I’ve now got my own oramoss dot com domain and that my blogger blog is published on that domain now.

Thanks to Andreas Viklund for the template.

If anyone sees anything untoward with the new site please feel free to drop me a note. It’s a bit thin on content but I’ll work on that over time.

ORA-07455 and EXPLAIN PLAN…and statements which, perhaps, shouldn’t run

I encountered a scenario today which I thought was strange in a number of ways…hence, irresistible to a quick blog post.

The scenario started with an end user of my warehouse emailing me a query that was producing an error message dialog box warning them, before they ran the query, that they had insufficient resources to run it – ORA-07455 to be precise.

I figured either the query genuinely required significant resources – more than the user had access to – or the query had a suboptimal plan, whereby it looked like it would require more resources than they had access to. A sketch of the kind of limit involved is below.
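
For context, ORA-07455 is raised when the Resource Manager estimates a statement will exceed a MAX_EST_EXEC_TIME limit on the user's consumer group. A minimal sketch of the sort of directive that produces it – the plan and group names here are made up for illustration:

-- Hypothetical: cap estimated execution time at an hour for a consumer
-- group; statements the optimizer estimates will run longer fail with
-- ORA-07455 before they start.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan              => 'WAREHOUSE_PLAN'
   ,group_or_subplan  => 'END_USERS'
   ,comment           => 'Reject queries estimated at over an hour'
   ,max_est_exec_time => 3600);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/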

To try and determine which, I logged into the same Oracle user as my end user and tried to get an explain plan of the query – so I could perhaps gauge whether there were any problems with the choice of execution path and whether the query was one which would indeed require significant resources.

The result was that it came back with the same error – which quite surprised me at first.

In using EXPLAIN PLAN, I wasn't asking the database to actually run the query – merely to tell me the likely execution path – and yet it appears the resource checks are still applied. At first that seemed strange, in the sense that I wouldn't be consuming those resources since I wasn't actually executing the statement, yet perhaps it does make sense – or is at least consistent – because, for example, you don't need access to all the objects in the query if you're not going to execute it, yet quite rightly the optimizer checks whether you have the appropriate access permissions on each object as part of the EXPLAIN PLAN process.

That was educational point number one for me.

After logging in as another user with unlimited resource usage, I then reran the EXPLAIN PLAN and the statement was accepted and the plan returned…indicating an unpleasant rewrite of the query, and a very high anticipated cost – in excess of the limit for that end user.

That explained why the ORA-07455 was appearing for them, but highlighted an altogether different issue which perplexed me further. There follows a simple reconstruction of the query and explain plan results:

First the obligatory test script…

 

SET TIMING OFF
DROP TABLE tab1 PURGE
/
CREATE TABLE tab1
(col1 VARCHAR2(1))
/
DROP TABLE tab2 PURGE
/
CREATE TABLE tab2
(col2 VARCHAR2(1))
/
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname => USER
                            ,tabname => 'TAB1'
                            );
DBMS_STATS.GATHER_TABLE_STATS(ownname => USER
                            ,tabname => 'TAB2'
                            );
END;
/
INSERT INTO tab1 VALUES('A')
/
INSERT INTO tab1 VALUES('B')
/
INSERT INTO tab1 VALUES('A')
/
INSERT INTO tab1 VALUES('B')
/
INSERT INTO tab2 VALUES('C')
/
INSERT INTO tab2 VALUES('D')
/
COMMIT
/
SET AUTOTRACE ON
SELECT *
FROM tab1
WHERE col1 IN (SELECT col1 FROM tab2)
/
SET AUTOTRACE OFF

 

Now the results…

 

Table dropped.

Connected.

Table dropped.


Table created.


Table dropped.


Table created.


PL/SQL procedure successfully completed.


1 row created.

1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


Commit complete.


C
-
A
B
A
B

4 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 4220095845

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |     2 |     4   (0)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL | TAB1 |     1 |     2 |     2   (0)| 00:00:01 |
|*  3 |   FILTER            |      |       |       |            |          |
|   4 |    TABLE ACCESS FULL| TAB2 |     1 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter( EXISTS (SELECT /*+ */ 0 FROM "TAB2" "TAB2" WHERE
           :B1=:B2))
3 - filter(:B1=:B2)


Statistics
----------------------------------------------------------
       1  recursive calls
       0  db block gets
      18  consistent gets
       0  physical reads
       0  redo size
     458  bytes sent via SQL*Net to client
     381  bytes received via SQL*Net from client
       2  SQL*Net roundtrips to/from client
       0  sorts (memory)
       0  sorts (disk)
       4  rows processed

 

Now, when I first saw the query I thought, hang on a minute, COL1 does not exist in table TAB2, so this query should not even execute…but it does! I don't think it should execute, personally, but according to the documentation, “Oracle resolves unqualified columns in the subquery by looking in the tables named in the subquery and then in the tables named in the parent statement.”, so it is operating as described in the manuals – even if, in my view, it's a little odd, since without a rewrite the query is incapable of executing.

The query has been rewritten with an EXISTS approach – note the first FILTER statement in the “Predicate Information” section of the autotrace output. A bit like this:

 

SELECT a.*
FROM   tab1 a
WHERE EXISTS (SELECT 0
           FROM   tab2 b
           WHERE  a.col1 = a.col1
          )
/

 

The subquery is always going to return a row, hence, for any row we select in the containing query, we will always get that row back, because the EXISTS will always find a match – it’s a bit like saying “WHERE TRUE” I guess.

Interestingly, my friend Jon first brought this scenario to my attention last week in various discussions with him and another of my colleagues, who is far more experienced than myself. To be fair, the experienced colleague is the source of a number of my blogging posts, but he’s painfully shy and will therefore remain nameless.

I was educated during that discussion that this functionality is as advertised in the manuals – even if it doesn't sit well with me. My closing line to my fellow debaters at the time was that nobody would ever write SQL like that, and if they did, I'd tell them to rewrite it using aliases so that it made sense – as is often the case in life though, the very next week a real life user came up with exactly that scenario – at least I was prepared!
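
For the record, qualifying the columns is exactly what surfaces the typo – against the test tables above, the aliased version refuses to run at all:

-- With aliases, the bogus column reference no longer silently resolves
-- to TAB1.COL1 - Oracle rejects it instead:
SELECT *
FROM   tab1 a
WHERE  a.col1 IN (SELECT b.col1 FROM tab2 b)
/
-- ORA-00904: "B"."COL1": invalid identifier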

Using AWR to summarise SQL operations activity

I was asked recently how much “use” a database had – a none too specific question, but basically the person asking wanted to know how many queries, DMLs, DDLs etc. were run against the database on a daily basis…kind of like the queries per hour metrics that are sometimes quoted on TPC benchmarks, I guess.

I didn’t have an answer as to where to get this information so I figured it warranted a ramble through the AWR tables to try and come up with something…

This isn’t going to be a description of how the AWR works – there are manuals for that. Instead, I’ll just give you the salient points from my exploration.

The AWR data, including its history, is accessible through a collection of DBA views named DBA_HIST%.

Part of the AWR processing captures Active Session History (ASH) data which is then accessible via the DBA_HIST_ACTIVE_SESS_HISTORY view. From this view we can obtain, for any given snap point in time, the sessions which were active, together with details of what they were doing and the statistics relating to the performance of that activity.

On this view is a column called SQL_OPCODE, which tells us the type of SQL operation being performed by the session, with various codes indicating different things. I didn't manage to find a definitive list of these SQL_OPCODE values in a table or view anywhere so, by trial and error, I worked them out as follows:

SQL_OPCODE    SQL Operation
1             DDL
2             INSERT
3             Query
6             UPDATE
7             DELETE
47            PL/SQL package call
50            Explain Plan
189           MERGE

…I'm sure this isn't definitive…someone really smart out there will probably know where this list is actually stored and will, perhaps, put it in a comment on this post 😉
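
One possibility, which I'd want to verify before relying on it: these codes appear to line up with the audit action codes, so something like this may give the fuller list:

-- The command type codes above seem to match AUDIT_ACTIONS
-- (e.g. 2 = INSERT, 3 = SELECT, 6 = UPDATE, 7 = DELETE).
SELECT action
,      name
FROM   audit_actions
WHERE  action IN (1,2,3,6,7,47,50,189)
ORDER BY action
/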

OK, now to code up something that goes back over this AWR data for, say, the last seven days, and groups it up by SQL operation so we can see what our activity has been. I used a ROW_NUMBER analytic (cos they rock) to make sure we only get the latest snapshot row from the AWR table – if a SQL operation runs for a long time then it may appear in the AWR view more than once…and we don't want to double count it as far as executions go.

I’ve grouped up those sessions which don’t have an active SQL statement associated with them (i.e. SQL_ID is NULL) into groups based on their Object Type and I’ve then used ROLLUP to get the results for each SQL operation by day and also the summed up totals by operation across the seven days and a grand total.

I'm not actually sure why some rows in DBA_HIST_ACTIVE_SESS_HISTORY don't have a SQL_ID against them – if they were not actually running a statement in the session then that sounds fair enough…but some of them had values in PLSQL_ENTRY_OBJECT_ID, indicating (I think) that they were running a specific package (DBMS_STATS or DBMS_SPACE in many cases), so the fact they didn't have a SQL_ID confused me – perhaps it means they were in that package but not actually running a query at the time of the snap – either way, they're grouped up separately.

Here is a SQL*Plus spool of the code and results from a test system:

x_j4134[543/9757]@AED52> l
  1  WITH ash AS
  2  (
  3  SELECT TRUNC(ash.sample_time) sample_day
  4  ,      (CASE WHEN ash.sql_opcode = 47
  5               THEN 'PL/SQL'
  6               WHEN ash.sql_opcode IN(1)
  7               THEN 'DDL'
  8               WHEN ash.sql_opcode IN(2,6,7,189)
  9               THEN 'DML'
 10               WHEN ash.sql_opcode IN(50)
 11               THEN 'Explain Plan'
 12               WHEN ash.sql_opcode IN(3)
 13               THEN 'Query'
 14               ELSE 'No Statement ID; In object type: '||NVL(o.object_type,'Not Specified')
 15          END) statement_type
 16  ,      ROW_NUMBER() OVER(PARTITION BY ash.sql_id,ash.sql_child_number ORDER BY ash.sample_time DESC) rn
 17  FROM   dba_hist_snapshot s
 18  ,      dba_hist_active_sess_history ash
 19  ,      dba_objects o
 20  WHERE  s.snap_id = ash.snap_id(+)
 21  AND    s.dbid = ash.dbid(+)
 22  AND    s.instance_number = ash.instance_number(+)
 23  AND    ash.plsql_entry_object_id = o.object_id(+)
 24  AND    TRUNC(ash.sample_time) BETWEEN TRUNC(SYSDATE-6) AND TRUNC(SYSDATE+1) -- all within last 7 days
 25  )
 26  SELECT sample_day
 27  ,      statement_type
 28  ,      COUNT(1)
 29  FROM   ash
 30  WHERE  rn = 1
 31  GROUP BY ROLLUP(sample_day)
 32  ,        ROLLUP(statement_type)
 33  ORDER BY sample_day
 34* ,        statement_type
x_j4134[543/9757]@AED52> /

SAMPLE_DAY           STATEMENT_TYPE                                       COUNT(1)
-------------------- ---------------------------------------------------- ----------
02-FEB-2007 00:00:00 DDL 112
02-FEB-2007 00:00:00 DML 49
02-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 292
02-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 7
02-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 131
02-FEB-2007 00:00:00 PL/SQL 301
02-FEB-2007 00:00:00 Query 181
02-FEB-2007 00:00:00 1073
03-FEB-2007 00:00:00 DDL 20
03-FEB-2007 00:00:00 DML 26
03-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 91
03-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 2
03-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 84
03-FEB-2007 00:00:00 PL/SQL 166
03-FEB-2007 00:00:00 Query 12
03-FEB-2007 00:00:00 401
04-FEB-2007 00:00:00 DDL 127
04-FEB-2007 00:00:00 DML 14
04-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 410
04-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 305
04-FEB-2007 00:00:00 PL/SQL 306
04-FEB-2007 00:00:00 Query 14
04-FEB-2007 00:00:00 1176
05-FEB-2007 00:00:00 DDL 115
05-FEB-2007 00:00:00 DML 81
05-FEB-2007 00:00:00 Explain Plan 1
05-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 261
05-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
05-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 161
05-FEB-2007 00:00:00 PL/SQL 315
05-FEB-2007 00:00:00 Query 360
05-FEB-2007 00:00:00 1310
06-FEB-2007 00:00:00 DDL 98
06-FEB-2007 00:00:00 DML 86
06-FEB-2007 00:00:00 Explain Plan 2
06-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 212
06-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
06-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 108
06-FEB-2007 00:00:00 PL/SQL 299
06-FEB-2007 00:00:00 Query 439
06-FEB-2007 00:00:00 1260
07-FEB-2007 00:00:00 DDL 98
07-FEB-2007 00:00:00 DML 162
07-FEB-2007 00:00:00 Explain Plan 1
07-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 210
07-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 24
07-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 96
07-FEB-2007 00:00:00 PL/SQL 337
07-FEB-2007 00:00:00 Query 348
07-FEB-2007 00:00:00 1276
08-FEB-2007 00:00:00 DDL 112
08-FEB-2007 00:00:00 DML 420
08-FEB-2007 00:00:00 Explain Plan 1
08-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 311
08-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 25
08-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 119
08-FEB-2007 00:00:00 PL/SQL 572
08-FEB-2007 00:00:00 Query 493
08-FEB-2007 00:00:00 2053
DDL 682
DML 838
Explain Plan 5
No Statement ID; In object type: Not Specified 1787
No Statement ID; In object type: PACKAGE 90
No Statement ID; In object type: PROCEDURE 1004
PL/SQL 2296
Query 1847
8549

68 rows selected.

Elapsed: 00:09:49.44

It was just an idea…if anyone has anything to add/improve it feel free to comment.