I tweeted a reference yesterday to a 9 year old article about index rebuilds, and this led me on to look for an item that I thought I’d written on a related topic. I hadn’t written it (so there’s another item for my to-do list), but I did discover a draft I’d written a few years ago about an unpleasant side effect of rebuilding subpartitions of local indexes on composite partitioned tables. It’s probably the case that no-one will notice they’re suffering from it because it’s a bit of an edge case – but you might want to review the things your system does.
Here’s the scenario: you have a large table that is composite partitioned with roughly 180 daily partitions and 512 subpartitions (per partition). For some strange reason you have a couple of local indexes on the table that have been declared unusable – hoping, perhaps, that no-one ever does anything that makes Oracle decide to rebuild all the unusable bits.
One day you decide to rebuild just one subpartition of one of the indexes that isn’t marked as unusable. You might be planning to rebuild every single subpartition of that index overnight, but you’re going to start with just one to see how long it takes. Something very strange happens – and here’s a simple model to demonstrate:
rem
rem     Script:         index_rebuild_pt_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2015
rem
rem     Last tested
rem             18.3.0.0
rem             12.1.0.1
rem             11.2.0.4
rem

create table interval_hash (
        n1      number,
        n2      number,
        n3      number
)
segment creation immediate
partition by range(n1) interval (1)
subpartition by hash (n2) subpartitions 16
        ( partition p1 values less than (2) )
;

begin
        for i in 1 .. 16 loop
                for j in 1..64 loop
                        insert into interval_hash (n1, n2, n3)
                        values (i, j, j + 64 * (i - 1));
                end loop;
        end loop;
        commit;
end;
/

create index ih_i1 on interval_hash(n1) local;
create index ih_i2 on interval_hash(n2) local;
create index ih_i3 on interval_hash(n3) local;

alter index ih_i1 unusable;
alter index ih_i2 unusable;
The code creates a table which extends as data arrives to have 16 partitions with 16 subpartitions each – for a total of 256 data segments. After loading the data I’ve created 3 local indexes on the table and made two of them unusable.
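If you want to confirm the shape of the objects the script has created, a quick sanity check against the data dictionary is possible – a sketch only, assuming you’re connected as the owning schema:

```sql
--
-- Count segments per object: the table should report 256 TABLE SUBPARTITION
-- segments; the two unusable indexes may report none at all, since recent
-- versions of Oracle drop the segments of unusable index (sub)partitions.
--
select  segment_name, segment_type, count(*)
from    user_segments
where   segment_name in ('INTERVAL_HASH', 'IH_I1', 'IH_I2', 'IH_I3')
group by
        segment_name, segment_type
order by
        segment_name, segment_type
;
```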
After setting up the table and indexes I’ve identified one subpartition of the table by name, enabled tracing, and rebuilt the corresponding subpartition of the index which is currently usable (the same effect appears if I rebuild a subpartition of one of the unusable indexes, but the phenomenon is slightly more surprising if you rebuild a usable subpartition). Here’s the code for the rebuild:
column max_subp new_value m_subp

select  max(partition_name) max_subp
from    user_segments
where   segment_name = 'INTERVAL_HASH'
;

alter session set events '10046 trace name context forever, level 4';

alter index ih_i3 rebuild subpartition &m_subp;

alter session set events '10046 trace name context off';
Would you expect to find anything interesting in the trace file after running it through tkprof? Here’s the most frequently executed SQL statement I found when running this test on an instance of Oracle 18.3:
tkprof or18_ora_24939.trc temp sort=execnt

SQL ID: 0yn07bvqs30qj Plan Hash: 866645418

select pctfree_stg, pctused_stg, size_stg, initial_stg, next_stg,
  minext_stg, maxext_stg, maxsiz_stg, lobret_stg, mintim_stg, pctinc_stg,
  initra_stg, maxtra_stg, optimal_stg, maxins_stg, frlins_stg, flags_stg,
  bfp_stg, enc_stg, cmpflag_stg, cmplvl_stg, imcflag_stg, ccflag_stg,
  flags2_stg
from
 deferred_stg$ where obj# =:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    512      0.02       0.03          0          0          0           0
Fetch      512      0.00       0.00          0       1536          0         512
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1025      0.03       0.03          0       1536          0         512
This query runs once for every single subpartition of the two unusable indexes. (There’s another statement that runs once for every partition of the two unusable indexes to provide the object numbers of those subpartitions – and that shouldn’t be forgotten.) In my example the impact and time to run don’t look too bad – but when the numbers climb to a couple of hundred thousand executions before you start to rebuild the first subpartition you might start to worry. Depending on the state of your data dictionary, and how you got to the point where you had so many unusable segments, the time to execute could become significant – and you might do most of it all over again for the next subpartition!
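To get an idea of how exposed you might be before scheduling a mass rebuild, you could count the unusable subpartitions per index – a simple sketch against the user_ind_subpartitions view:

```sql
--
-- Roughly how many recursive deferred_stg$ lookups might a single
-- "rebuild subpartition" trigger? One per unusable index subpartition.
--
select  index_name, count(*) unusable_subparts
from    user_ind_subpartitions
where   status = 'UNUSABLE'
group by
        index_name
order by
        index_name
;
```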
You might wonder why anyone would have a couple of unusable indexes. First, many years ago (in Practical Oracle 8i) I pointed out that if you wanted to create a new locally partitioned index you might want to create it unusable and then rebuild each partition in turn – that might not be such a good idea any more. (The book also pointed out the need to think about sizing the dictionary cache (rowcache).)
Secondly, before the introduction of partial indexing it was possible to emulate the feature manually for local indexes by setting partitions and subpartitions unusable and allowing the optimizer to use table expansion to pick the best plan for each partition according to whether or not its index partition was still usable.
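As a reminder of that manual mechanism, the statement would look something like the following – the subpartition name here is invented; in practice you’d pick it from user_ind_subpartitions:

```sql
--
-- Emulating "partial" local indexing the old way: mark the index
-- subpartitions you don't want maintained as unusable, and let table
-- expansion work out which partitions can still use the index.
--
alter index ih_i3 modify subpartition sys_subp1234 unusable;
```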
Finally, if you are using the new partial indexing feature of 12.2 – where you can set the default characteristic of a partitioned table to “indexing off” and the default characteristic of an index to “indexing partial” – the partitions of any local index that are not created are deemed to be deferred. You won’t see the effect in my example, though, unless you modify it to include partial indexes and add an “alter system flush shared_pool” just before the rebuild.
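For reference, the partial indexing syntax looks something like this – a sketch only, with invented table, column and index names:

```sql
--
-- Default the table to "no indexing", switch indexing on for a chosen
-- partition, then create the local index as a partial index; index
-- partitions for "indexing off" table partitions are not created.
--
create table t1 (
        n1      number,
        v1      varchar2(10)
)
indexing off
partition by range (n1) (
        partition p1 values less than (10) indexing on,
        partition p2 values less than (20)
)
;

create index t1_i1 on t1(n1) local indexing partial;
```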
Footnote
I’ve found this bug (or something very similar) on MoS: Bug 17335646: ALTER INDEX IDX REBUILD SUBPARTITION SP VISITS EVERY INDEX AND SUBPARTION. However the bug was recorded against 11.2.0.3 and its status is: “31 – Could Not Reproduce. To Filer”. It does seem to be terribly easy to reproduce, though, provided you have a large number of unusable subpartitions in your indexes – so it’s possible the original bug appeared even when there were no unusable subpartitions (the customer comments on the bug don’t give any suggestion that there might have been unusable indexes in place – and it seems unlikely that the 22 indexes mentioned were all unusable).