
Is BT making a profit out of charity?

Sad as it is, I’ve just finished watching the end of Fame Academy on the BBC and felt quite disgusted that, unless I’m very much mistaken, it appears BT are making a profit out of charity.

Since I occasionally appear to have a worldwide audience I should perhaps explain that Fame Academy is a show the BBC (a TV company in the UK, although I’m guessing most people have heard of it beyond our shores!) produces, where a number of talented musicians/singers compete to win the prize of a £1m recording contract for a year. The show starts with a number of contestants, say 14, and each night the 3 show judges vote for those who should stay in the competition, with about 3 of them ending up in the “drop zone” each night…which is where BT come in. Those in the “drop zone” can be saved by members of the public who call a telephone number or send an SMS text in…BT being the company that runs the telecommunications infrastructure for the show. The contestant in the drop zone with the lowest number of votes is voted off the show that night…and that repeats every night until someone is left standing as the winner.

Now, when people call in or send an SMS to the show to register a vote it costs them a certain amount of money – about 50p per vote these days it seems. This money goes to BT…which, when the show is unrelated to charity, seems fair game to me. However, the show has been “hijacked” in recent years – more successfully than the original, in my view – for the Comic Relief charity movement, and it raises a lot of money by running a similar show where the talented musicians are replaced by various celebrities who fight to win the competition in the same way. When the show operates for charity one would think that the telco in question wouldn’t take any of the money that each vote produces…but oh no, it appears – in the small print displayed on screen of course – that BT charges 50p for each vote and “at least” 34p goes to charity – so where does the other 16p go? Well, far be it from me to say, but perhaps they use that to make up the big fat cheque they then present on the charity night to say “look how generous we are at BT…here’s a cheque for £1m”?

Yeah, I know, I’m a bit bitter cos they screwed up my broadband installation but hey ho…you know what they say about a bad customer experience being reported to at least 10 people that person knows…

😉

Vista on VMWare

I’ve not got Microsoft Vista yet but my brother Steve tells me that I might need to read this link in order to install it onto a VMWare Workstation VM.

I must remember this when my copy of Vista comes through on my Microsoft Action Pack subscription.

10gR2: OBJECT_NAME missing from V$SQL_PLAN for INSERT statement

I’d written a query the other day to identify SQL statements that had requested a certain degree of parallelism (DOP) but had either run with fewer PX slaves than requested or had run serially – so that we could identify whether the (poor) performance of a query was potentially due to it not getting the resources (PX) we expected it to.

I posted the query on this entry here but have updated it since and it’s on this link now…in case I change it again. It needs more work as it currently doesn’t fully understand how many PX slaves will be used for a given piece of SQL, e.g. Sorts and Set operations (UNION etc…) all add to the requirements…I’m trying to determine the rules for this so I can improve the query in this respect.

Anyway, it seemed to work reasonably well – particularly at finding things that were supposed to run in parallel but actually ran serially.

I did, however, notice that even on certain (INSERT) statements that didn’t involve sorts/set operations, it was showing records where the required DOP (what I thought would be the maximum DOP it was going to request) was actually less than the achieved DOP. This didn’t make sense until I delved deeper and found that the OBJECT_NAME column in V$SQL_PLAN seems to be NULL on the line relating to the INSERT operation in the plan. Running an EXPLAIN PLAN for the statement showed the OBJECT_NAME correctly; it’s only once the statement has executed and V$SQL_PLAN is checked that this column turns out, unexpectedly, to be NULL.

With the OBJECT_NAME being NULL, my script was not including the DOP of the target table when determining the object with the highest DOP, so it sometimes derived an incorrect value, producing rows where the Required DOP was less than the Achieved DOP.

I created the obligatory test case to illustrate the point before contacting support to raise an SR (5686332.993). It turns out that this is all down to bug 5204810, which is (allegedly) fixed in 11g. The bug is a little unclear in that it only talks about this issue for conventional inserts and not Direct Path inserts, which were where I’d experienced the problem.

The bug suggests that 11g will add a new line to the execution plans such that it will look like this:


-------------------------------------------
| Id | Operation                | Name    |
-------------------------------------------
|  0 | INSERT STATEMENT         |         |
|  1 |  LOAD TABLE CONVENTIONAL | TABLE_X |
-------------------------------------------


The bug suggests that this Id 1 line will be visible in 11g and that the OBJECT_NAME (TABLE_X in this instance) will then be shown correctly on this new line rather than being NULL.

Oracle support suggested that although it wasn’t directly mentioned in this bug, it was still the same issue causing the problem for my Direct Path inserts. I’m not entirely convinced, since Direct Path inserts already have an extra line (LOAD AS SELECT) under the INSERT STATEMENT line in their plans. Perhaps the fix, as well as adding the LOAD TABLE CONVENTIONAL line for conventional inserts, also ensures that the OBJECT_NAME is correctly displayed when it’s a Direct Path INSERT and the operation is LOAD AS SELECT…I’ll try to remember to test it when 11g becomes available.
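In the meantime, if you want to see the anomaly on your own system, a minimal reproduction along these lines should show it (this is just a sketch – the table name, DOP and row count are illustrative, not my actual SR test case):

drop table px_target;

create table px_target(col1 number) parallel 8;

alter session enable parallel dml;

insert /*+ append */ into px_target
select rownum from dual connect by level <= 1000;

commit;

-- EXPLAIN PLAN shows the OBJECT_NAME for the insert, but the runtime
-- plan line (LOAD AS SELECT here) has a NULL object_name in 10gR2
select p.id, p.operation, p.object_name
from v$sql s, v$sql_plan p
where s.sql_id = p.sql_id
and s.sql_text like 'insert /*+ append */ into px_target%';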

Scalar Subqueries and their effect on execution plans

Scalar subqueries…I remember Tom extolling their virtues at the UKOUG last year in one of his presentations. They seem like a neat idea except that they have an unpleasant side effect which I came across the other day on a production system.

We had a situation where a piece of DML was running slowly and I was asked to take a look at it to see what it was doing. I was told that the query had undergone some modifications recently (can you hear the alarm bells ringing !?) but that the execution plan had been checked and it was fine.

I first ran a little script called get_query_progress.sql, which I wrote to get the execution plan from V$SQL_PLAN and link it to the work area and long ops V$ views to give a better picture of where a query is at in its execution plan…hopefully giving an indication of how far through it is. The results showed that the plan was as I had expected for this piece of DML – it’s a big operation using parallel query, full table scans and hash joins to process lots of rows in the most efficient way…so why was it going slowly if the execution plan looked good? Well, the progress script wasn’t really much use here since it just showed the execution plan and didn’t have any long ops to speak of, so I couldn’t tell exactly where in the plan it had got to…time for another tack.
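As an aside, the core idea behind get_query_progress.sql is roughly the following – a simplified sketch rather than the actual script (which also hooks in V$SESSION_LONGOPS); plan lines with an active work area are the ones currently doing the work:

select p.id
,      p.operation
,      p.object_name
,      w.operation_type
,      w.last_memory_used
,      w.last_execution
from   v$sql_plan p
,      v$sql_workarea w
where  p.sql_id = '&sql_id'
and    w.sql_id(+) = p.sql_id
and    w.child_number(+) = p.child_number
and    w.operation_id(+) = p.id
order  by p.id
/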

I then ran a query against V$SESSION_WAIT_HISTORY to see if that would show what we were waiting for. Expecting to see scattered reads (and perhaps reads/writes on temp from the hash joins), I was surprised to see lots of sequential reads, which suggested the use of an index – but the execution plan from above did not involve any indexes, so this seemed odd.
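The query was nothing fancy – something along these lines, with :sid bound to one of the sessions running the DML:

select seq#
,      event
,      p1text
,      p1
,      p2text
,      p2
from   v$session_wait_history
where  sid = :sid
order  by seq#
/

For db file sequential read, P1 is the file# and P2 is the block#.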

Next I figured I’d try to determine what the sequential reads were actually reading, by plugging the File# and Block# from the wait information into this query:

SELECT segment_name, segment_type, owner, tablespace_name
FROM dba_extents
WHERE file_id = <file#>
AND <block#> BETWEEN block_id AND (block_id + blocks - 1)
/

Which showed that the sequential reads were in fact reading an index on one of the tables – but how could this be when the plan showed no sign of this step?

I took the DML and ran an EXPLAIN PLAN on it to see what the intended plan would be, and it showed no sign of this index read either. So that’s EXPLAIN PLAN and V$SQL_PLAN (i.e. the actual plan in use) both showing no sign of this index read, and yet it was definitely happening.

Looking at the changes to the DML, my colleague Anthony and I noted that the recent changes had involved adding some new code, including some scalar subqueries on some additional tables – tables which were not showing in the intended/actual plans either. We decided to rewrite the query using an outer join approach rather than the scalar subquery, and the plans then showed up the access of these new tables, and that the access was via a full table scan. Executing the DML then resulted in an elapsed time in line with expectations.

I did set up a simple test case and then ran a 10053 trace on it to see what was in there –

DROP TABLE test1
/
CREATE TABLE test1(col1 number
,col2 number)
/
DROP TABLE test2
/
CREATE TABLE test2(col1 number
,col2 number)
/
BEGIN
FOR r IN 1..100000 LOOP
INSERT INTO test1 VALUES(r,100000+r);
IF MOD(r,2)=0 THEN
INSERT INTO test2 VALUES(100000+r,200000+r);
END IF;
END LOOP;
COMMIT;
END;
/
exec dbms_stats.gather_table_stats(ownname => USER,tabname => 'TEST1',estimate_percent => 100);
exec dbms_stats.gather_table_stats(ownname => USER,tabname => 'TEST2',estimate_percent => 100);

SELECT /*+ OUTER JOIN APPROACH */
COUNT(1) FROM (
SELECT t1.col1
, t1.col2
, t2.col2
FROM test1 t1
, test2 t2
WHERE t1.col2 = t2.col1(+)
ORDER BY t1.col1
)
/
alter session set events '10053 trace name context forever';

SELECT /*+ SCALAR SUBQUERY APPROACH */
COUNT(1) FROM (
SELECT t1.col1
, t1.col2
, (SELECT t2.col2
FROM test2 t2
WHERE t1.col2 = t2.col1
) col2
FROM test1 t1
ORDER BY t1.col1
)
/

In the trace file we first see the scalar subquery being evaluated…

****************
QUERY BLOCK TEXT
****************
SELECT t2.col2
FROM test2 t2
WHERE t1.col2 = t2.col1

*********************
QUERY BLOCK SIGNATURE
*********************
qb name was generated
signature (optimizer): qb_name=SEL$3 nbfros=1 flg=0
fro(0): flg=0 objn=55825 hint_alias="T2"@"SEL$3"

… then the costs for this subquery:

*********************************
Number of join permutations tried: 1
*********************************
Final - First Rows Plan: Best join order: 1
Cost: 27.6270 Degree: 1 Card: 1.0000 Bytes: 9
Resc: 27.6270 Resc_io: 26.0000 Resc_cpu: 10783378
Resp: 27.6270 Resp_io: 26.0000 Resp_cpu: 10783378

…then it works on the whole query:

****************
QUERY BLOCK TEXT
****************
SELECT /*+ SCALAR SUBQUERY APPROACH */
COUNT(1) FROM (
SELECT t1.col1
, t1.col2
, (SELECT t2.col2
FROM test2 t2
WHERE t1.col2 = t2.col1
) col2
FROM test1 t1
ORDER BY t1.col1
)
*********************
QUERY BLOCK SIGNATURE
*********************
qb name was generated
signature (optimizer): qb_name=SEL$51F12574 nbfros=1 flg=0
fro(0): flg=0 objn=55824 hint_alias="T1"@"SEL$2"

…the costs for which are:

*********************************
Number of join permutations tried: 1
*********************************
Final - All Rows Plan: Best join order: 1
Cost: 57.8271 Degree: 1 Card: 100000.0000 Bytes: 500000
Resc: 57.8271 Resc_io: 55.0000 Resc_cpu: 18737631
Resp: 57.8271 Resp_io: 55.0000 Resp_cpu: 18737631

And the (incomplete) plan it thinks it’s running:

============
Plan Table
============
--------------------------------------+-----------------------------------+
| Id | Operation           | Name     | Rows  | Bytes | Cost | Time      |
--------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT    |          |       |       |   58 |           |
| 1  |  SORT AGGREGATE     |          |     1 |     5 |      |           |
| 2  |   TABLE ACCESS FULL | TEST1    |   98K |  488K |   58 | 00:00:01  |
--------------------------------------+-----------------------------------+

So, I guess you get more information in the 10053 trace file, including the breakdown for the subquery(ies) – in the case I was working on, that would have highlighted the use of this new table and its access path being (inefficiently) via an index. It still gets the overall plan wrong though.

After a bit of reading, Anthony discovered that Tom had mentioned this issue of execution plans not reflecting actual processing in his Effective Oracle by Design book (pp. 504–514). Tom remarked that it was an issue in 9iR2 and that he hoped it would be fixed in a future release – well, that release is not 10gR2, since we’re still seeing this anomaly.

Parallel downgrades to serial after an OWB 10.1.0.4 upgrade

For the last few days we’ve encountered a problem with our OWB mappings running with wildly varying performance from one day/mapping to the next and we were a little confused as to why…figured it out now and thought I’d blog what we found…

The problem we found was that we were getting lots of “Parallel operations downgraded to serial” – which, given that we’re not using adaptive multi user whilst running our batch suite, is indicative of mappings starting up and finding that there are no parallel slaves left whatsoever to play with. This was further confirmed by the fact that there were, in comparison, not many occurrences of downgrades by 25/50/75%…

select name
, value
from v$sysstat
where name like 'Parallel%'
/

Name                                          Value
------------------------------------------- -------
Parallel operations not downgraded           18,694
Parallel operations downgraded to serial    476,364
Parallel operations downgraded 75 to 99 pct      15
Parallel operations downgraded 50 to 75 pct     137
Parallel operations downgraded 25 to 50 pct     214
Parallel operations downgraded 1 to 25 pct       85

So why would that be then when the batch suite has been working fine (i.e. getting finished in good time) for quite some time now ?

The obvious thing to ask would be “what has changed?”, but we thought we’d try to look at it through an analysis of database metrics – we’ve got Enterprise Manager, AWR and ASH to look at. So we tried to find where it tells us whether a given job (i.e. a piece of DML SQL in an ETL mapping) ran in parallel or not and, more importantly, with what degree of parallelism. Now, we may be novice EM users, but we basically couldn’t find what we wanted, which was essentially the contents of V$PX_SESSION at the point in time the query ran – so we could compare the DEGREE (actually achieved) with the REQ_DEGREE (the DOP asked for), or spot that there was no record at all in V$PX_SESSION, indicating that the query ran serially.

I’m sure there is probably a way of getting this information from EM/AWR/ASH, but I couldn’t find it, so after a fruitless search I decided to build a simple capture process which recorded the contents of V$PX_SESSION in a history table; we scheduled an EM job to run it periodically (every minute in our case).
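The capture side of it is trivial – something like this sketch (the procedure name is made up for illustration; ours had a bit more housekeeping, and note that a definer’s rights procedure needs a direct grant on V_$PX_SESSION):

create table mgmt_t_v$px_session_history as
select sysdate rundate
,      p.*
from   v$px_session p
where  1=0
/

create or replace procedure capture_px_session_history as
begin
  -- snapshot the current PX sessions, stamped with the capture time
  insert into mgmt_t_v$px_session_history
  select sysdate, p.* from v$px_session p;
  commit;
end;
/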

After we’d recorded this information we were able to analyse it by creating a simple view which linked this history data (in MGMT_T_V$PX_SESSION_HISTORY) to some of the ASH views…

CREATE OR REPLACE VIEW mgmt_v_sql_dop_analysis AS
WITH sql_dop AS
(
SELECT /*+ NO_MERGE MATERIALIZE */
DISTINCT hsp.sql_id
FROM dba_hist_sql_plan hsp
, dba_tables t
WHERE hsp.object_owner = t.owner
AND hsp.object_name = t.table_name
AND hsp.object_owner IS NOT NULL
AND t.degree IS NOT NULL
AND LTRIM(RTRIM(t.degree)) != '1'
)
SELECT /*+ ORDERED USE_HASH(dhs,sd,ps) */
hsh.sql_id
, hsh.sample_time
, TRUNC(hsh.sample_time) ash_sample_date
, dhs.sql_text
, (CASE WHEN sd.sql_id IS NULL
        THEN 'SERIAL JOB'
        ELSE 'PARALLEL JOB'
   END) job_type
, (CASE WHEN ps.saddr IS NULL
        THEN 'NOPARALLEL'
        ELSE 'PARALLEL'
   END) ran_parallel_flag
, ps.rundate
, ps.saddr
, ps.sid
, ps.serial#
, ps.qcsid
, ps.qcserial#
, ps.degree
, ps.req_degree
FROM dba_hist_active_sess_history hsh
, dba_hist_sqltext dhs
, sql_dop sd
, mgmt_t_v$px_session_history ps
WHERE ps.qcsid(+) = hsh.session_id
AND ps.qcserial#(+) = hsh.session_serial#
AND hsh.sql_id = dhs.sql_id
AND hsh.sql_id = sd.sql_id(+)
-- we started collecting mgmt_t_v$px_session_history
-- at this point...
AND hsh.sample_time >=
    TO_DATE('15-JUL-2006','DD-MON-YYYY')
/

…and then run this query:


SELECT a.sql_id
, a.sample_time
, a.ash_sample_date
, a.sql_text
, a.rundate
, a.job_type
, a.ran_parallel_flag
, a.saddr
, a.sid
, a.serial#
, a.qcsid
, a.qcserial#
, a.degree
, a.req_degree
FROM mgmt_v_sql_dop_analysis a
where ash_sample_date = '21-JUL-2007'
and job_type = 'PARALLEL JOB'
and (ran_parallel_flag = 'NOPARALLEL'
or req_degree != degree)
/

which finds jobs which involve tables with a DOP set on them, and therefore should be using parallel, but which actually ran either without any parallel slaves [i.e. Downgraded to Serial]…and hence were probably really slow…or ran with fewer than the requested parallel slaves [i.e. Downgraded by some percentage (25/50/75)]…and hence were probably slower than we’d have liked.

This highlighted lots of jobs which were not getting any DOP (as per the high figures of “Parallel operations downgraded to serial”) and a small number of jobs which were being downgraded somewhat – the same information as from V$SYSSTAT but more detailed in terms of which jobs had been affected and how.

So, we’d identified which jobs were getting downgraded, and it was pretty much in line with the elapsed times of those jobs – the more the downgrade, the slower they went. The problem is that this doesn’t specifically identify why, except that we noticed lots of the mappings were running with a far higher DOP than we’d expected (24 versus 8), which is probably why we were exhausting the supply of slaves.

We couldn’t work out why some of these mappings were getting a DOP of 24 instead of the 8 we were expecting and had set on the tables. It pointed to the table-level DOP being ignored or overridden, which in turn pointed the finger of suspicion at the mapping code.

So, back to the original question of “What’s changed recently ?”…

Well, we’d undergone an upgrade of OWB from 10.1.0.2 to 10.1.0.4 very recently and, to cut a long story short, it seems the problem occurred as a result of that. The reason is that OWB 10.1.0.2 corrupted the PARALLEL hints in the mappings, so the DML SQL resorted to using the table-level degree, which was 8. The 10.1.0.4 version has fixed this hint corruption, and some new mappings recently created in this version now use the hints the mapping specifies, which are of the form:

PARALLEL(table_alias DEFAULT)

…which on our box, with CPU_COUNT = 24 and PARALLEL_THREADS_PER_CPU = 1, means those mappings now get a DOP of 24 instead of the table-level DOP of 8.
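It’s easy to demonstrate the difference (a sketch – some_table is illustrative) on a box like ours:

alter table some_table parallel (degree 8);

-- the hint overrides the table setting with the system default DOP
select /*+ parallel(t default) */ count(*) from some_table t; -- requests 24 (CPU_COUNT x PARALLEL_THREADS_PER_CPU)

-- no hint: the table-level degree applies
select count(*) from some_table t; -- requests 8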

The issue didn’t affect the mappings originally created in 10.1.0.2, as they still had the corrupted hints in place, but new mappings created in OWB 10.1.0.4 suffered from the problem.

What happens then is that those new mappings, when they run in the schedule, ask for (and get) far more of the available pool of PX slaves than we anticipated; the pool of slaves is depleted and later-running mappings have no slaves to use, thereby running serially.
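Incidentally, a quick way of watching the pool being drained while the schedule runs is something like:

select statistic
,      value
from   v$px_process_sysstat
where  statistic like 'Servers%'
/

Once “Servers In Use” reaches PARALLEL_MAX_SERVERS, anything else that starts simply runs serially.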

The solution was to go through all the mappings and remove the PARALLEL hints – instead we’ll be working off table level parallel degree settings exclusively and our batch suite should settle down again.

Extending Designer ROB to display User Extensibility attributes

Lucas Jellema from Amis wrote a nice blog page here about invoking the full property palette for secondary elements in the Repository Object Browser component of Oracle Designer. I mention it because we use Designer to store the physical model for our data warehouse and we were a little disappointed to find that the ROB wasn’t showing us all the attributes from the repository – particularly the User Extensibility attributes we’d added to handle some warehousey type attributes that Designer doesn’t handle out of the box.

After reading the article Lucas had written I followed his instructions on a test repository installation and found that with the exception of one issue it worked perfectly. The issue I encountered was that when I tried to use the ROB to display the full property palette (for a Table Column) it was giving an error web page showing the message:

The requested URL /pls/rob/odwaprop.property_palette was not found on this server.

I traced through the code and eventually added some exception handling so that I could determine the error being raised, which uncovered the following stack:

An error occurred during processing of this page SQLERRM:
ORA-06550: line 13, column 21:
PL/SQL: ORA-00903: invalid table name
ORA-06550: line 3, column 13:
PL/SQL: SQL Statement ignored

I couldn’t figure out where exactly in the bowels of the Designer code this was emanating from so I just added a simple exception handler to ignore this error:

exception
  when others then
    -- swallow only the ORA-06550 raised from deep inside the Designer code;
    -- anything else is re-raised as normal
    if sqlcode = -6550 then
      null;
    else
      raise;
    end if;

(I know this isn’t perfect before anyone points it out – but it works and saves me having to dig into the bowels of Oracle Designer code to determine exactly why this error is being raised in the first place and then potentially fixing it (if possible)…the ROB is read only so it’s not like anything is going to be harmed by using the above hack).

The article by Lucas shows an example of making Entity Attributes hyperlinks that bring up the full property palette for the attribute. I’ve used the same principles to turn the Column names into hyperlinks that bring up the full property palette for the column, and the Table name into a hyperlink that brings up the palette for the Table – it works in the same way except that the package to modify is CDWP_TBL, and the code added is as follows.

First in summary_column_definitions I made the following change to turn the column names into hyperlinks:

-- START******************************************************
-- Jeff Moss Start Change for property palette

cdwp.TableDataValue
( odwaprop.palette_link
  ( p_sac_irid => r_col.col_id
  , p_label    => r_col.col_name
  , p_type_id  => 5014 -- type ID for COLumn
  )
);
/* OLD CODE...
cdwp.TableDataValue
( cdwpbase.local_link
  ( p_bookmark => 'COL'||to_char(r_col.col_id)
  , p_text     => cdwp.add_images('{'||l_col_gif||'}') -- 2.2
                  ||r_col.col_name
  )
);
*/
-- END********************************************************

…then in the list_table_definitions procedure just after the htp.headOpen call I added the call to create the palette function itself:

-- START******************************************************
-- Jeff Moss Start Change for property palette

odwaprop.add_palette_link_function
( p_workarea_irid => odwactxt.get_workarea_irid
, p_session_id    => p_session_id
);
-- END********************************************************

…and, finally, further down in the list_table_definitions procedure where the Table Name is displayed I made the following modification:

-- START******************************************************
-- Jeff Moss Start Change for property palette

odwaprop.palette_link
( p_irid    => r_tbl.tbl_irid
, p_ivid    => r_tbl.tbl_ivid
, p_label   => r_tbl.tbl_name
, p_type_id => 4974 -- type ID for TaBLe
)
/* OLD CODE...
r_tbl.tbl_name
*/
-- END********************************************************

After all that it works fine and displays a full property palette for these elements – a palette which includes any User Extensibility attributes.

We store things like the following in our User Extensibility attributes:

  • Legacy Key Column Indicator
  • Legacy Source Table
  • Legacy Source Schema
  • Legacy Source Database

…we’ll probably use more over time but at least now the rich metadata we’re capturing in Designer is available for display via the ROB.

Incidentally, changing Oracle-supplied code is not normally allowed, but I raised an SR to ask for permission and support basically said that as long as you don’t expect support on code you have modified, and you restrict the changes to the ROB packages, which are in a read-only area of the system, then it is OK for us to do this. You’d obviously need to gain the appropriate agreement yourselves if you decide to pursue this approach.

What made it funny going through this stuff was that I wasn’t aware of who Lucas Jellema was – I’d heard his name and that of Amis but didn’t really know him. It didn’t take long to find his name plastered liberally over the version control headers of the Oracle-supplied ROB packages I was modifying, at which point I realised he’d obviously had some past experience of this stuff whilst working at Oracle on the product. I figured he probably knows what he’s talking about on this front.

Thanks to Lucas for this one – it’s been very useful to us.

TRUNCATE command marks previously UNUSABLE indexes as USABLE

Yes, I’m back…but on an adhoc basis as I still don’t have enough time for this really! Yeah – I know, you’re all in the same boat 😉

I came across this problem on the warehouse the other day and was slightly surprised by what was going on…

We have an OWB mapping which does a TRUNCATE and then INSERT APPEND into a target table. The target table has some bitmap indexes on it. We run a little preprocess before calling the mapping to mark the bitmaps as unusable so it doesn’t maintain them and then we (hope to) get better performance during the INSERT APPEND…we then rebuild the indexes afterwards more efficiently than maintaining them on the fly…sounds simple enough…except that performance was dragging…so I had a quick shufty and observed that the indexes were still marked as USABLE. I asked the guy running the stuff to check the run script and he said “No, we marked them as UNUSABLE before we ran the mapping – I’m sure of it”.

That’s strange I thought…time for some debug…which proved that we were setting the indexes UNUSABLE but somehow by the time the mapping was running they had miraculously been marked as USABLE.

I decided to create a simple test script to run in SQL*Plus which would do the same things as the OWB mapping – to try and repeat it at the SQL level and also to prove/disprove whether OWB was having some effect.

It’s a useful thing to do in any case, as it helps if you’re going to end up:

A) Testing it on other platforms/versions
B) Sending it to Oracle Support via an SR.

(Excuse the formatting – either Blogger is crap or I don’t understand how to get this to look nice – take your pick)


drop table j4134_test_bmi;

create table j4134_test_bmi(col1 number not null
,col2 number not null);

create unique index jtb_pk_i on j4134_test_bmi(col1);

alter table j4134_test_bmi add constraint jtb_pk primary key(col1) using index;

create bitmap index jtb_bmi on j4134_test_bmi(col2);

select index_name,index_type,status,partitioned from user_indexes where table_name='J4134_TEST_BMI';

alter index jtb_bmi unusable;

alter table j4134_test_bmi disable constraint jtb_pk;

alter index jtb_pk_i unusable;

select index_name,index_type,status,partitioned from user_indexes where table_name='J4134_TEST_BMI';

truncate table j4134_test_bmi;

select index_name,index_type,status,partitioned from user_indexes where table_name='J4134_TEST_BMI';

insert /*+ append */ into j4134_test_bmi(col1,col2)
select rownum
, 100
from (select level l from dual connect by level <= 100000) -- NB: the comparison was eaten by the blog formatting; 100000 is assumed
/


commit;

select index_name,index_type,status,partitioned from user_indexes where table_name='J4134_TEST_BMI';

A section of the output of the script follows:

INDEX_NAME                     INDEX_TYPE STATUS   PAR
------------------------------ ---------- -------- ---
JTB_PK_I                       NORMAL     UNUSABLE NO
JTB_BMI                        BITMAP     UNUSABLE NO

2 rows selected.

Elapsed: 00:00:00.06

Table truncated.

Elapsed: 00:00:00.11

INDEX_NAME                     INDEX_TYPE STATUS   PAR
------------------------------ ---------- -------- ---
JTB_PK_I                       NORMAL     VALID    NO
JTB_BMI                        BITMAP     VALID    NO

Which shows that immediately after the TRUNCATE command was issued, the previously UNUSABLE indexes were suddenly marked as USABLE!

Why? It’s not documented anywhere that this would happen…so I created an SR with Oracle Support, who initially created a bug for it…but Development came back to say that it is “expected behaviour”. The reasoning Support give for this being expected is that when the table is truncated, the associated index segments are also truncated…and if they have no data in them then they can’t possibly be UNUSABLE, so they get marked as USABLE as a side effect of the process.

It’s actually a reasonable assumption for them (Oracle development) to make, to be fair – but it would have helped if it had been documented. They’re going to create a Metalink support note to cover the issue.

Thanks to Doug for pushing me to blog this – I’ll have to buy you a drink at the OUG Scotland Conference 2006!

Au revoir – ORA-01034

It’s been fun while it lasted!

I’m giving up the blogging for now…my personal family life gets busier by the day and I want to focus on my wife and son, Jude now.

I’ll still be presenting at the end of April at the Northern Server Technology Day but that will also be my last presentation for some time.

For those who have read my blog – thanks, I’ve enjoyed writing it and hopefully helping some of you along the way – it’s been a very educational experience thus far.

Take care
Jeff

Compression and ITL

I’ve been a bit quiet on the blogging front lately – I’ve been trying to work out what happens inside database blocks during compression, as well as trying to run some benchmarking based on Doug Burns’ latest parallel execution presentation.

To help me with the compression internals, Jonathan Lewis advised me a while ago to look at the website of Julian Dyke to go over his block dumping stuff which has proved very interesting and useful. I also found a presentation Julian did on compression based on Oracle 9iR2 which had some stuff I’d covered in my last presentation as well as plenty more detailed stuff as you’d expect from Julian.
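If you want to follow along with the block dumping, finding the file and block for a row of interest is straightforward – a sketch, with comp_test standing in for whichever table you’re examining (the dump lands in a trace file in USER_DUMP_DEST):

select dbms_rowid.rowid_to_absolute_fno(rowid, user, 'COMP_TEST') file#
,      dbms_rowid.rowid_block_number(rowid) block#
from   comp_test
where  rownum = 1
/

alter system dump datafile 4 block 12345; -- substitute the file#/block# from above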

Julian has a great picture of the structure and breakdown of the inside of a compressed block in his presentation which I’ve been trying to explore in more detail by testing with different block sizes and data. One of the things that has come to light is that there are quite a few factors involved in determining the compression that will be realized when using data segment compression. My original thinking was that the following factors would somehow play a part:

  • Block size
  • Number of rows
  • Length of data values
  • Number of repeats
  • Ordering of data being pushed into the target data segment

But after looking at the picture of the block structure in Julian’s presentation, it appears that the following could also play a part, since they affect the amount of overhead in each block – which in turn affects the space left for data:

  • ITL (Interested Transaction List – set by the value of INITRANS at table creation)
  • Number of columns in the data segment

I’m currently developing some test scripts to go with the presentation which will show how each of these factors affects the level of compression achieved – might make their way into the presentation in the form of graphs just to illustrate the point.
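As a flavour of what those scripts do (a sketch – source_data and repeating_col are illustrative), the effect of ordering can be shown by building the same data twice and comparing segment sizes:

create table comp_ordered compress pctfree 0 as
select * from source_data order by repeating_col;

create table comp_unordered compress pctfree 0 as
select * from source_data;

select segment_name
,      blocks
from   user_segments
where  segment_name in ('COMP_ORDERED','COMP_UNORDERED')
/

The ordered version clusters the repeated values within blocks, so the block-level symbol table gets more hits and the segment ends up smaller.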

I did have a TAR (sorry SR!) open with Oracle to see if they’d give me more details on the actual algorithm that is used during compression but after much delay and deliberation they (development) decided it was something they didn’t want to divulge.

Funniest thing I’ve found so far is that Julian shows the compressed block header for a block in a 9.2.0.1 database clearly showing “9ir2” literals – you’d think they’d change when you move to “10gr2” wouldn’t you ? Think again – it still shows “9ir2” in the 10gr2 block dump trace files!

Northern Server Technology Day – presenting again.

I’m presenting again on 27th April at the Northern Server Technology Day – a cut down version of my last presentation this time – just a couple of the topics, which I’ll hopefully be able to cover in a little more detail and a little less rushed.

I’m second on, after Doug Burns and his latest paper – How many slaves?

Should be a good event – see you all there!

On the blog front, thanks to Justin Kestelyn earlier today I’m now on blogs.oracle.com – let’s see what that does to traffic.