Category: Oracle

TPC-H Query 20 and optimizer_dynamic_sampling

I was working with Jason Garforth today on a TPC-H benchmark script which we can run on our warehouse to get an initial performance baseline and then, from time to time, rerun to ensure things are still performing at a comparable level.

This activity was on our new warehouse platform of an IBM Power 6 p570 with 8 dual core 4.7GHz processors, 128GB RAM and a 1.6GB/Sec SAN.

Jason created a script to run the QGEN utility to generate the twenty-two queries that make up the TPC-H benchmark, plus a “run script” to execute those queries against the target schema I had created using the load scripts I talked about previously.

The whole process seemed to be running smoothly, with queries completing in a matter of seconds, until query twenty went off the scale. Excluding that one, the whole suite went through in about three to four minutes, but query twenty ran for hours with no sign of completing.

We grabbed the actual execution plan and noticed that all the tables involved had no stats gathered. In such circumstances, Oracle (10.2.0.4 in this instance) uses dynamic sampling to take a quick sample of the table in order to come up with an optimal plan for each query executed.

The database was running with the default value of 2 for optimizer_dynamic_sampling.

The TPC-H specification doesn’t say whether stats should or should not be gathered, but obviously there would be a cost to gathering them and, depending on the method of gathering and the volume of the database, that cost could be considerable. It would be interesting to hear from someone who actually runs audited TPC-H benchmarks as to whether they gather table stats or rely on dynamic sampling…

We decided we would gather the stats, just to see if the plan changed and the query executed any faster…it did, on both counts, with the query finishing very quickly, inline with the other twenty one queries in the suite.
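For reference, this is roughly the shape of the stats gathering we ran – the schema name and degree here are illustrative, not our actual values:

```sql
-- Gather stats for the whole benchmark schema.
-- 'TPCH' and degree => 16 are example values only.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'TPCH'
   ,estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
   ,degree           => 16      -- make use of the parallel hardware
   ,cascade          => TRUE    -- gather index stats too
  );
END;
/
```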

So, our options then appeared to include, amongst other things:


  1. Gather the table stats. We’d proved this worked.
  2. Change the optimizer_dynamic_sampling level to a higher value and see if it made a difference.
  3. Manually, work out why the plan for the query was wrong, by analysis of the individual plan steps in further detail and then use hints or profiles to force the optimizer to “do the right thing”.

We decided to read a Full Disclosure report of a TPC-H benchmark for a similar system to see what they did. The FDR included a full listing of the init.ora of the database in that test. The listing showed that the system in question had set optimizer_dynamic_sampling to 3 instead of the default 2…we decided to try that approach and it worked perfectly.
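For anyone wanting to try the same thing, the parameter can be changed at session level for a quick test of a single run, or instance-wide as per the FDR system; something like:

```sql
-- Session level, to test one run without affecting anyone else:
ALTER SESSION SET optimizer_dynamic_sampling = 3;

-- Or instance wide:
ALTER SYSTEM SET optimizer_dynamic_sampling = 3 SCOPE=BOTH;
```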

In the end, given we’re not producing actual audited benchmarks, we’re free to wear the cost of gathering optimizer stats, so we’ll go with that method. It was interesting to see that option 2 above worked as well, though, and it illustrates the point that there is a lot of useful information to be gleaned from reading the FDRs of audited benchmarks – whilst, of course, taking them with a pinch of salt, since they are not trying to run your system.

Another thing of interest: in order to get the DBGEN utility to work on AIX 6.1 using the gcc compiler, we had to set an environment variable as follows, otherwise we got an error when running DBGEN (it applies to QGEN too):

Set this:

export LDR_CNTRL=MAXDATA=0x80000000@LARGE_PAGE_DATA=Y

otherwise you may get this:

exec(): 0509-036 Cannot load program dbgen because of the following errors:
0509-026 System error: There is not enough memory available now.

Oracle Optimized Warehouse Initiative (OWI)

I enjoyed a trip out of the office today with my manager. We went down to the Oracle Enterprise Technology Centre, in Reading, to hear about the Oracle Optimized Warehouse Initiative. It was basically a half day pitch from Oracle and Sun today, although there are other dates with different hardware vendors (IBM, HP and Dell).

It was an interesting day, although I didn’t really hear anything new, per se. I think the main things I took away from the session were:

  • The availability of a set of reference configurations providing matrices which cover various permutations of user count, storage cost focus, warehouse size and hardware vendor.
  • The possibility of using the reference configurations either minimally, to cross-check a proposed hardware specification for a given workload, or going the whole hog: an “out of the box” reference configuration, delivered to your office fully configured with all software installed, ready to run in your data centre.
  • Oracle are pushing RAC and ASM heavily in the DW space – no surprise there.
  • HammerOra and ORION are used by Oracle and the hardware vendors to assess the reference configurations…and there is nothing stopping you using them for your own benchmarking efforts.

It was interesting to hear about the Proof Of Concept facility that Sun has, in Linlithgow, Scotland. The facility allows Sun (and an Oracle customer) to take a server off the production line and, working with the customer, test their workload on that machine to see if it’s going to work or not. Neat, and since we’re going to be using some Sun kit for a DW shortly, it sounds like an opportunity.

Funniest thing of the day for me, was the last slide in the pitch given by Mike Leigh of Sun which had the title “The Power of 2” and was illustrating the benefits to customers of Oracle and Sun as a united force. I didn’t really take much notice, as I was too busy smiling, as I looked at the title and it made me think of Doug and his Parallel Execution and the ‘Magic of 2’ paper (the Magic of 2 bit actually being from this paper by Cary).

If you’re building a warehouse, or just want to get an idea of whether your hardware is appropriate for the job, it’s probably worth reading up on the OWI.

Problem with _gby_hash_aggregation_enabled parameter

Here’s a tale about an Oracle initialisation parameter…and a lesson we should all take note of…

For three days, my colleagues in the support team on one of the warehouses I’m involved in had been struggling with a piece of code which was exhausting the available temp space, and after trying everything they could think of, they asked me to take a look. I must admit I was a little baffled at first: the code in question had been happily running for some time, and every time I’d run it I’d never noticed TEMP being anywhere near exhausted. So, whilst I could tell the process had some kind of problem, I was in the dark as to exactly what that problem was.

After trying to break down the large query into several smaller steps, I realised that it was an early step in the query that was exhibiting the problem of temp exhaustion – the step being a pivot of around 300 million measure rows into a pivot target table.

This process runs monthly and it had run without issue on all of the previous 20 occurrences and for roughly the same, if not more rows to pivot…so it didn’t appear to be the volume that was causing the problem.

There had been no changes in the code itself for many months and the current version of the code had been run successfully in previous months, so it didn’t appear to be a code fault.

I obtained the actual execution path for the statement whilst it was running and it looked reasonable, although looking at it, triggered a thought in my mind…what if something had changed in the database configuration?

Why would I get that thought from looking at the execution path?

Well, a while back, we had received advice that the following line in our init.ora should be changed to set the feature to TRUE instead of FALSE, so that the feature became active:

_gby_hash_aggregation_enabled = FALSE

This results in a line in the plan that reads:

HASH GROUP BY

instead of

SORT GROUP BY

The parameter was set to FALSE due to a known bug and the issues we’d seen with it; however, the recent advice we’d received indicated that the bug had been resolved at the version level we were on and that by enabling the feature – which enables GROUP BY and aggregation using a hash scheme – we’d gain a performance boost for certain queries.

So, the DBA team researched the advice and it appeared to be the case, that the bug (4604970) which led to the disabling of the feature was fixed at our version level (10.2.0.3 on HP-UX). We duly turned on the feature in a pre production environment and ran it for a while without noticing any issues. We then enabled it in production and again, for a while, we’ve not noticed any issues…until now.

After a check back through the logs, it appeared that the queries which were now failing had not been run at all since the parameter was first enabled – they had only run prior to the parameter change. With my suspicions aroused further, I disabled the feature at the session level and reran the process. It completed in a normal time frame and used a small amount of TEMP – hooray!
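For completeness, disabling the feature at session level is simply the following – note the double quotes, which are required for underscore parameters, and as ever, hidden parameters should really only be touched under guidance from Oracle Support:

```sql
ALTER SESSION SET "_gby_hash_aggregation_enabled" = FALSE;
```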

So, now we have to go back to support to try and understand if the original bug is not quite fixed or whether this is a different scenario…in any event, we’re going to disable the feature for now, even though we’re only getting problems with the feature on 2 processes out of perhaps thousands.

So, what’s the lesson to learn?

Well, quite simply, that you need to have a thorough audit of what configuration changes you’ve made together with a good audit of the processes you’ve run so that you can work out what has changed since the last time you successfully ran a process. This gives you a fighting chance of spotting things like the above.

Tracking TEMP usage throughout the day

I really wanted to do this via AWR, but I’ve not been able to find out if this kind of information is stored and, if it is, how I’d get access to it…maybe someone else knows…hint hint?!

I have various queries I run whilst I’m online and processes are running, but what I really wanted, was to know the usage profile throughout the day…so, given that I couldn’t find an AWR way of tracking our use of TEMP on a database, I figured I’d use a more klunky method…

DROP TABLE temp_use_history PURGE
/

CREATE TABLE temp_use_history(snap_date      DATE         NOT NULL
                             ,sid            NUMBER       NOT NULL
                             ,segtype        VARCHAR2(9)  NOT NULL
                             ,qcsid          NUMBER       NULL
                             ,username       VARCHAR2(30) NULL
                             ,osuser         VARCHAR2(30) NULL
                             ,contents       VARCHAR2(9)  NULL
                             ,sqlhash        NUMBER       NULL
                             ,sql_id         VARCHAR2(13) NULL
                             ,blocks         NUMBER       NULL
                             )
PCTFREE 0
COMPRESS
NOLOGGING
/
CREATE UNIQUE INDEX tuh_pk ON
temp_use_history(snap_date,sid,segtype)
PCTFREE 0
COMPRESS
NOLOGGING
/
ALTER TABLE temp_use_history ADD CONSTRAINT tuh_pk PRIMARY
KEY(snap_date,sid,segtype)
USING INDEX
/

DECLARE
  l_program_action VARCHAR2(2000);
  l_27477 EXCEPTION;
  PRAGMA EXCEPTION_INIT(l_27477,-27477);
BEGIN
  BEGIN
    DBMS_SCHEDULER.CREATE_SCHEDULE(
      schedule_name   => 'MINUTELY_5M'
     ,start_date      => SYSDATE
     ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=5'
     ,comments        =>'Daily schedule to run a job every five minutes.'
                                  );
  EXCEPTION
    WHEN l_27477 THEN
      NULL;  -- Ignore if the schedule exists
  END;

  l_program_action :=                   'DECLARE';
  l_program_action := l_program_action||'  l_date DATE := SYSDATE; ';
  l_program_action := l_program_action||'BEGIN';
  l_program_action := l_program_action||'  INSERT /*+ APPEND */ INTO temp_use_history(snap_date,sid,qcsid,username,osuser,contents,segtype,sqlhash,sql_id,blocks)';
  l_program_action := l_program_action||'  SELECT l_date,s.sid,ps.qcsid,s.username,s.osuser,su.contents,su.segtype,su.sqlhash,su.sql_id,sum(su.blocks)';
  l_program_action := l_program_action||'  FROM   v$sort_usage su';
  l_program_action := l_program_action||'  ,      v$session s';
  l_program_action := l_program_action||'  ,      v$px_session ps';
  l_program_action := l_program_action||'  WHERE  s.sid=ps.sid(+)';
  l_program_action := l_program_action||'  AND    s.saddr = su.session_addr';
  l_program_action := l_program_action||'  AND    s.serial# = su.session_num';
  l_program_action := l_program_action||'  GROUP BY s.sid,ps.qcsid,s.username,s.osuser,su.contents,su.segtype,su.sqlhash,su.sql_id;';
  l_program_action := l_program_action||'  COMMIT; ';
  l_program_action := l_program_action||'END;';

  BEGIN
    DBMS_SCHEDULER.CREATE_PROGRAM(
      program_name => 'SNAP_TEMP_USAGE'
     ,program_type => 'PLSQL_BLOCK'
     ,program_action => l_program_action
     ,enabled        => TRUE
     ,comments       => 'Program to snap the temp usage into TEMP_USE_HISTORY'
                                 );
  EXCEPTION
    WHEN l_27477 THEN
      NULL;  -- Ignore if the program exists
  END;

  BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name      => 'JOB_SNAP_TEMP_USAGE'
     ,program_name  => 'SNAP_TEMP_USAGE'
     ,schedule_name => 'MINUTELY_5M'
     ,enabled       => TRUE
     ,auto_drop     => FALSE
     ,comments      => 'Job to snap the temp usage into TEMP_USE_HISTORY'
                             );
  EXCEPTION
    WHEN l_27477 THEN
      NULL;  -- Ignore if the job exists
  END;
END;
/

I can now run queries against the TEMP_USE_HISTORY table to show how much TEMP has been used, when, by whom and for what use, e.g.

SQL> ed
Wrote file afiedt.buf

  1  select snap_date,round(sum(blocks)*32/1024/1024) gb
  2  from temp_use_history
  3  where snap_date > sysdate-1
  4  group by snap_date
  5  having round(sum(blocks)*32/1024/1024) > 50
  6* order by 1
SQL> /

SNAP_DATE                    GB
-------------------- ----------
26-FEB-2008 16:12:25         57
26-FEB-2008 16:17:25         65
26-FEB-2008 16:22:25         74
26-FEB-2008 16:27:25         86
26-FEB-2008 16:32:25         95
26-FEB-2008 16:37:25        107
26-FEB-2008 16:42:25        121
26-FEB-2008 16:47:25        127
26-FEB-2008 16:52:25        147
26-FEB-2008 16:57:25        160
26-FEB-2008 17:02:25        162
26-FEB-2008 17:07:25        179
26-FEB-2008 17:12:25        196
26-FEB-2008 17:17:25        208
26-FEB-2008 17:22:25        217
26-FEB-2008 17:27:25        233
26-FEB-2008 17:32:25        241
26-FEB-2008 17:37:25        251
26-FEB-2008 17:42:25        257
26-FEB-2008 17:47:25        262
26-FEB-2008 17:52:25        264
26-FEB-2008 17:57:25        267
26-FEB-2008 18:02:25        201
27-FEB-2008 00:27:25         59
27-FEB-2008 00:32:25         60
27-FEB-2008 00:37:25         69
27-FEB-2008 01:12:25         57
27-FEB-2008 02:22:25         53
27-FEB-2008 09:57:25         51

29 rows selected.

From the above, I can see that, during the last twenty four hours on the database in question, there was reasonably heavy use of the TEMP area between 4pm and 6pm yesterday, with the load peaking at approximately 267GB.
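A slight variation on the same query shows who was responsible during the heavy periods – just a sketch, using the same 32KB-block arithmetic as the query above:

```sql
SELECT snap_date
,      username
,      round(sum(blocks)*32/1024/1024) gb
FROM   temp_use_history
WHERE  snap_date > sysdate-1
GROUP BY snap_date,username
HAVING round(sum(blocks)*32/1024/1024) > 50
ORDER BY gb DESC
/
```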

Getting multi row text items from Designer repository

I’ve mentioned Lucas Jellema before when talking about Oracle Designer – he’s helped me out again today via this post on the AMIS blog.

What I wanted, was to be able to extract, using a SQL query against the designer repository, the “Description” attribute from our Tables so that I could create table comments in the database which had the description from the table in Designer.

Extracting scalar attributes is simple enough, however, description is a multi row text property so it needed a bit more thought…and rather than reinvent the wheel, I had a look at Lucas’ stuff and sure enough found the post above. It was almost what I wanted, only I had to change it to look for table stuff rather than entities and attributes.

I used the same TYPE and sum_string_table function as Lucas so if you’re trying to use this stuff you’ll probably need to read Lucas’ article first.

The query I ended up with is below…it’s been “sanitized” somewhat, but I’m sure you’d get the picture, if retrieving stuff out of the designer repository is a requirement of yours.

WITH dsc AS
(
SELECT txt.txt_text
,      txt.txt_ref
FROM   cdi_text txt
WHERE  txt.txt_type = 'CDIDSC'
)
, add_cast AS
(
SELECT appsys.name application_system_name
,      b.name table_name
,      b.alias
,      CAST(COLLECT(dsc.txt_text) AS string_table) tab_description
FROM   dsc
,      designer.ci_application_systems appsys
,      designer.ci_app_sys_tables a
,      designer.ci_table_definitions b
WHERE  dsc.txt_ref = a.table_reference
AND    b.irid = a.table_reference
AND    a.parent_ivid = appsys.ivid
GROUP BY appsys.name 
,        b.name
,        b.alias
)
SELECT application_system_name
,      table_name
,      alias
,      sum_string_table(tab_description)
FROM   add_cast
WHERE  application_system_name = 'MY_APP_SYS'
and    table_name = 'MY_TABLE_NAME'
/

Thanks Lucas!

On another note, regular visitors may realise I’ve now got my own oramoss dot com domain and that my blogger blog is published on that domain now.

Thanks to Andreas Viklund for the template.

If anyone sees anything untoward with the new site please feel free to drop me a note. It’s a bit thin on content but I’ll work on that over time.

11g IO Calibration tool

After reading how inadequate Doug was feeling over his IO subsystem, I decided to see how quick mine was. Not that we’re getting into a “mine is better than yours” game; rather, to see how mine stacks up against Doug’s, bearing in mind his is a 5 disk stripe natively attached to his machine (I’m assuming) and mine is a logical disk attached to a VMWare machine – although admittedly the PC underneath that logical disk is running motherboard-based RAID striping over two physical SATA disks. I just figured it would be interesting to compare.

Obviously, any experiment that goes flawlessly according to a preconceived plan is:

1. Boring
2. Less educational
3. Not normally one I’ve done – mine always have problems it seems!

I ran the calibration on my VMWare based OpenSuse 10 linux with Oracle 11g and it immediately came up with a problem:

SQL> @io
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
2    lat  INTEGER;
3    iops INTEGER;
4    mbps INTEGER;
5  BEGIN
6  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (num_physical_disks, max_latency, iops, mbps, lat);
7     DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);
8
9     DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
10     DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
11     dbms_output.put_line('max_mbps = ' || mbps);
12  end;
13  /
DECLARE
*
ERROR at line 1:
ORA-56708: Could not find any datafiles with asynchronous i/o capability
ORA-06512: at "SYS.DBMS_RMIN", line 453
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1153
ORA-06512: at line 7

Of course, consulting the manual led me to run this query:

SELECT name, asynch_io
FROM v$datafile f,v$iostat_file i
WHERE f.file#        = i.file_no
AND   filetype_name  = 'Data File'
/

which gave:

SQL> /

NAME                                               ASYNCH_IO
-------------------------------------------------- ---------
/home/oracle/oradata/su11/system01.dbf             ASYNC_OFF
/home/oracle/oradata/su11/sysaux01.dbf             ASYNC_OFF
/home/oracle/oradata/su11/undotbs01.dbf            ASYNC_OFF
/home/oracle/oradata/su11/users01.dbf              ASYNC_OFF
/home/oracle/oradata/su11/example01.dbf            ASYNC_OFF

…or in other words no asynchronous IO available – as the error message had said.

After altering the filesystemio_options parameter to “setall” and bouncing the instance, a second run of the calibration process seemed to work fine…
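For anyone wanting to replicate that, the change was along these lines – it’s a static parameter, hence the instance bounce:

```sql
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```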

SQL> @io
SQL> SET ECHO ON
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
2    lat  INTEGER;
3    iops INTEGER;
4    mbps INTEGER;
5  BEGIN
6  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (num_physical_disks, max_latency, iops, mbps, lat);
7     DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);
8
9     DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
10     DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
11     dbms_output.put_line('max_mbps = ' || mbps);
12  end;
13  /
max_iops = 72
latency  = 13
max_mbps = 26

PL/SQL procedure successfully completed.

So, my figures are considerably lower than those Doug achieved:

max_iops = 112
latency  = 8
max_mbps = 32

…but not too bad I guess considering the fact that mine is a VM and the hardware I’m running is more humble…no seriously, size does not matter!

11g PX tracefiles now have the tracefile identifier on them

Now that I’ve got 11g up and running on OpenSuse 10.2 on a VMWare 6 VM, I’ve had time to do some playing with the latest and greatest release and the first thing I’ve noticed, when running some of Doug’s PX test scripts, is that the trace files generated for PX slaves now have the Tracefile Identifier appended to their name, making it easier to see which OS Process (PID) was responsible for the creation of the trace file – makes things a little easier and clearer.

In 10gR2 (10.2.0.2.0 specifically) the trace files would come out with names in this format:

<instance>_<process>_<ospid>.trc

e.g. fred_p001_6789.trc

In 11gR1 (11.1.0.6.0 specifically) the trace files come out with names in this format:
<instance>_<process>_<ospid>_<tracefile_identifier>.trc

e.g. fred_p001_5678_jeff.trc

This assumes you’ve set the tracefile identifier in the first place, otherwise that bit won’t be present. Use the following to set it, choosing whatever identifier you require of course:

alter session set tracefile_identifier='jeff';

It was interesting that the location of such files has also changed due to the implementation of Automatic Diagnostic Repository (ADR). More information on that here.
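If you’re hunting for where the trace files now live under ADR, v$diag_info will tell you, e.g.:

```sql
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Trace','Default Trace File')
/
```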

ORA-07455 and EXPLAIN PLAN…and statements which, perhaps, shouldn’t run

I encountered a scenario today which I thought was strange in a number of ways…hence, irresistible to a quick blog post.

The scenario started with an end user of my warehouse emailing me a query which, before it even ran, popped up an error dialog warning that they had insufficient resources to run it – ORA-07455 to be precise.

I figured, either the query is one requiring significant resources – more resources than the user has access to, or the query has a suboptimal plan, whereby it thinks it will require more resources than they have access to.

To try and determine which, I logged into the same Oracle user as my end user and tried to get an explain plan of the query – so I could perhaps gauge whether there were any problems with the choice of execution path and whether the query was one which would indeed require significant resources.

The result was that it came back with the same error – which quite surprised me at first.

In using EXPLAIN PLAN, I wasn’t asking the database to actually run the query – merely to tell me the likely execution path – and yet it appears to still do the checks on resource usage. At first that seemed strange, in the sense that I wouldn’t be requiring those resources, since I wasn’t actually executing the statement. On reflection, though, it is at least consistent: you don’t need access to all the objects in a query you’re not going to execute either, yet quite rightly, the optimizer checks whether you have the appropriate access permissions to each object as part of the EXPLAIN PLAN process.
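For reference, this is the sort of thing I was running – the table name here is a made-up stand-in for the user’s actual query, of course:

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'USER_QRY' FOR
SELECT * FROM some_fact_table;  -- hypothetical stand-in for the real query

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL,'USER_QRY'))
/
```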

That was educational point number one for me.

After logging in as another user with unlimited resource usage, I then reran the EXPLAIN PLAN and the statement was accepted and the plan returned…indicating an unpleasant rewrite of the query, and a very high anticipated cost – in excess of the limit for that end user.

That explained why the ORA-07455 was appearing for them, but highlighted an altogether different issue which perplexed me further. There follows a simple reconstruction of the query and explain plan results:

First the obligatory test script…

 

SET TIMING OFF
DROP TABLE tab1 PURGE
/
CREATE TABLE tab1
(col1 VARCHAR2(1))
/
DROP TABLE tab2 PURGE
/
CREATE TABLE tab2
(col2 VARCHAR2(1))
/
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname => USER
                            ,tabname => 'TAB1'
                            );
DBMS_STATS.GATHER_TABLE_STATS(ownname => USER
                            ,tabname => 'TAB2'
                            );
END;
/
INSERT INTO tab1 VALUES('A')
/
INSERT INTO tab1 VALUES('B')
/
INSERT INTO tab1 VALUES('A')
/
INSERT INTO tab1 VALUES('B')
/
INSERT INTO tab2 VALUES('C')
/
INSERT INTO tab2 VALUES('D')
/
COMMIT
/
SET AUTOTRACE ON
SELECT *
FROM tab1
WHERE col1 IN (SELECT col1 FROM tab2)
/
SET AUTOTRACE OFF

 

Now the results…

 

Table dropped.

Connected.

Table dropped.


Table created.


Table dropped.


Table created.


PL/SQL procedure successfully completed.


1 row created.

1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


Commit complete.


C
-
A
B
A
B

4 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 4220095845

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time
|
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |     2 |     4   (0)|00:00:01  |
|*  1 |  FILTER             |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL | TAB1 |     1 |     2 |     2   (0)|00:00:01  |
|*  3 |   FILTER            |      |       |       |            |          |
|   4 |    TABLE ACCESS FULL| TAB2 |     1 |       |     2   (0)|00:00:01  |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter( EXISTS (SELECT /*+ */ 0 FROM "TAB2" "TAB2" WHERE
           :B1=:B2))
3 - filter(:B1=:B2)


Statistics
----------------------------------------------------------
       1  recursive calls
       0  db block gets
      18  consistent gets
       0  physical reads
       0  redo size
     458  bytes sent via SQL*Net to client
     381  bytes received via SQL*Net from client
       2  SQL*Net roundtrips to/from client
       0  sorts (memory)
       0  sorts (disk)
       4  rows processed

 

Now, when I first saw the query I thought, hang on a minute, COL1 does not exist in table TAB2 so this query should not even execute…but it does! I don’t think it should execute, personally, but according to the documentation, “Oracle resolves unqualified columns in the subquery by looking in the tables named in the subquery and then in the tables named in the parent statement.“, so it is operating as described in the manuals – even if, in my view, it’s a little odd, since without a rewrite the query is incapable of executing.

The query has been rewritten with an EXISTS approach – note the first FILTER operation in the “Predicate Information” section of the autotrace output. A bit like this:

 

SELECT a.*
FROM   tab1 a
WHERE EXISTS (SELECT 0
           FROM   tab2 b
           WHERE  a.col1 = a.col1
          )
/

 

The subquery will return a row whenever TAB2 contains at least one row and A.COL1 is not null, hence, for any such row we select in the containing query, the EXISTS will always find a match – it’s a bit like saying “WHERE TRUE”, I guess.

Interestingly, my friend Jon first brought this scenario to my attention last week, during various discussions with him and another colleague who is far more experienced than I am. To be fair, that experienced colleague is the source of a number of my blog posts, but he’s painfully shy and will therefore remain nameless.

I was educated during that discussion that this behaviour is as advertised in the manuals – even if it doesn’t sit well with me. My closing line to my fellow debaters at the time was that nobody would ever write SQL like that, and that if they did, I’d tell them to rewrite it using aliases so that it made sense. As is often the case in life, though, the very next week a real-life user came up with exactly that scenario – at least I was prepared!
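Incidentally, qualifying the column with an alias is exactly the defence I’d suggest – with the alias in place, Oracle can no longer silently resolve the column to the parent table, and the mistake surfaces immediately. A sketch of what I mean:

SELECT a.*
FROM   tab1 a
WHERE  a.col1 IN (SELECT b.col1
                  FROM   tab2 b
                 )
/

…which fails with ORA-00904 (“invalid identifier”), because B.COL1 genuinely does not exist.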

DBA_SEGMENTS misleading PARTITION_NAME column

Whilst writing some code that referenced the DBA_SEGMENTS dictionary view today, I realised that the PARTITION_NAME column actually contains the name of the subpartition when the table is subpartitioned…a script and results to illustrate:

drop table jeff_unpartitioned purge
/
drop table jeff_partitioned purge
/
drop table jeff_subpartitioned purge
/
create table jeff_unpartitioned(col1 number,col2 number)
/
create table jeff_partitioned(col1 number,col2 number)
partition by range(col1)
(partition p1 values less than(100)
,partition p2 values less than(maxvalue)
)
/
create table jeff_subpartitioned(col1 number,col2 number)
partition by range(col1)
subpartition by list(col2)
subpartition template
(subpartition sub1 values(1)
,subpartition sub2 values(2)
)
(partition p1 values less than(100)
,partition p2 values less than(maxvalue)
)
/
select segment_name,partition_name,segment_type
from   user_segments
where  segment_name like 'JEFF%'
order by segment_name
/

 

…gives the following results…

Table dropped.

Table dropped.


Table dropped.


Table created.


Table created.


Table created.


                        Partition
SEGMENT_NAME             Name                 SEGMENT_TYPE
------------------------ -------------------- ------------------
JEFF_PARTITIONED         P2                   TABLE PARTITION
JEFF_PARTITIONED         P1                   TABLE PARTITION
JEFF_SUBPARTITIONED      P1_SUB1              TABLE SUBPARTITION
JEFF_SUBPARTITIONED      P2_SUB2              TABLE SUBPARTITION
JEFF_SUBPARTITIONED      P1_SUB2              TABLE SUBPARTITION
JEFF_SUBPARTITIONED      P2_SUB1              TABLE SUBPARTITION
JEFF_UNPARTITIONED                            TABLE

7 rows selected.

 

As you can see, the subpartitioned table shows the subpartition name in the PARTITION_NAME column. It’s not a big deal, and I only noticed because I assumed there would be a SUBPARTITION_NAME column whilst I was writing my code…the failure in compilation led me to track this slightly misleading situation down.

 

Why does this occur?

 

Well, if you delve into the code behind DBA_SEGMENTS you’ll see it refers to another view in the SYS schema called SYS_DBA_SEGS. The SQL behind SYS_DBA_SEGS selects all levels (table, partitions and subpartitions) from the SYS.OBJ$ table, but then “loses” the partitions when joining to SYS.SYS_OBJECTS (their OBJ# values in SYS.OBJ$ do not map to any row in SYS.SYS_OBJECTS via the OBJECT_ID column). The SQL then “loses” the table itself when joining to SYS.SEG$. Exactly why it does this I don’t fully understand, but I’m guessing it’s because those components of the composite object don’t have a segment of their own in which to physically store anything – storage belongs to the lowest level, in this case the subpartitions.

 

In any event, it’s a little bit of a gotcha, and the column could perhaps do with being renamed to something like SUB_SEGMENT_NAME.
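Until then, a workaround when you need the real partition name for a subpartitioned table is to join back to the *_TAB_SUBPARTITIONS dictionary view – a sketch, assuming the tables live in your own schema:

select s.segment_name
,      sp.partition_name
,      s.partition_name  subpartition_name
,      s.segment_type
from   user_segments          s
,      user_tab_subpartitions sp
where  s.segment_type       = 'TABLE SUBPARTITION'
and    sp.table_name        = s.segment_name
and    sp.subpartition_name = s.partition_name
/

USER_TAB_SUBPARTITIONS carries both the parent PARTITION_NAME and the SUBPARTITION_NAME, so the join recovers the level that USER_SEGMENTS obscures.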

Partition exchange loading and ORA-14097

Continuing the theme of this post by Howard, I came across a scenario today which was resulting in this error:

ORA-14097 - column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

I ran my checker script to try and identify exactly what the mismatch was, but it came back with nothing. My script, whilst useful, isn’t perfect – indeed there was a bug in it (fixed now) which stopped it identifying the problem in this scenario – so I had to compare all the attributes across the two tables manually to find the difference.

For a long time I was left perplexed, because my script – which checks quite a few things now – was suggesting that everything was OK; what I wasn’t taking into account was that the script itself was wrong, and that one of the things it was supposedly checking for, it was in fact overlooking.

In the end I found that a number of columns were NULLable on the source table whilst being NOT NULL on the target. My script was supposed to be checking for exactly this – which is why I struggled with the problem for so long. After matching the nullability of the columns on both tables in the exchange, it ran through fine…but I guess my point would be that the error message above doesn’t really convey that the problem might be a mismatch in the optionality of one or more columns on the tables involved.
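A quick way to spot this particular mismatch is to compare the NULLABLE flag in the dictionary for both tables – a sketch using the ALPHA/BETA tables from the script below (assuming the querying user can see both):

select s.column_name
,      s.nullable  source_nullable
,      t.nullable  target_nullable
from   all_tab_columns s
,      all_tab_columns t
where  s.owner       = 'ALPHA'
and    s.table_name  = 'SOURCE'
and    t.owner       = 'BETA'
and    t.table_name  = 'TARGET'
and    t.column_name = s.column_name
and    t.nullable   <> s.nullable
/

Any rows returned indicate columns whose optionality differs between the two tables.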

Being a fan of the fictional detective Sherlock Holmes, I should have considered his position that “It is an old maxim of mine that when you have eliminated the impossible, whatever remains, however improbable, must be the truth”…even if that truth is that your own script is at fault!

In attempting to investigate the problem I knocked up a simple script to demonstrate the problem and then the fix:

conn alpha/alpha@j4134

drop table source purge
/

create table source(pk_col1 number not null
,other_col1 number
,other_col2 date
,other_col3 varchar2(20)
)
/

create unique index test_pki on source(pk_col1)
/

alter table source add constraint test_pk primary key(pk_col1) using index
/

grant select on source to beta
/

insert into source(pk_col1,other_col1,other_col2,other_col3)
select l
, l
, trunc(sysdate+l)
, 'XXX'||to_char(l)
from (select level l from dual connect by level < 1000)
/
commit
/

conn beta/beta@j4134

drop table target purge
/

create table target(pk_col1 number not null
,other_col1 number not null
,other_col2 date
,other_col3 varchar2(20)
)
partition by range(pk_col1)
(partition p1 values less than(1000)
,partition p2 values less than(2000)
)
/

create unique index test_pki on target(pk_col1) local
/

alter table target add constraint test_pk primary key(pk_col1) using index
/

alter table target exchange partition p1 with table alpha.source
/

conn alpha/alpha@j4134

alter table source modify(other_col1 number not null)
/

conn beta/beta@j4134

alter table target exchange partition p1 with table alpha.source
/

…and the results…

Connected.

Table dropped.


Table created.


Index created.


Table altered.


Grant succeeded.


999 rows created.


Commit complete.

Connected.

Table dropped.


Table created.


Index created.


Table altered.

alter table target exchange partition p1 with table alpha.source
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION


Connected.

Table altered.

Connected.

Table altered.

Whilst in the example Howard gave, I think the issue revolved around the use of the word “shape” in error ORA-42016 and whether “shape” includes data type or not, error ORA-14097 revolves around whether nullability is included under the phrase “column type”. I think both errors could do with being either slightly reworded, or perhaps split out into separate errors which are more indicative of the true problem or problems at hand.