The /*+Go-Faster*/ Oracle Blog: comments (David Kurtz)

David Kurtz (2023-07-18):
As I set out in the blog, if atomic_refresh=>FALSE, then the MV is truncated. TRUNCATE TABLE is DDL and therefore includes a commit, so the MV contains no data while the refresh runs.
This is still the case in 19c.
If you cannot live with that but still want the advantages of non-atomic refresh, then you could use something like my approach in the section "Non-Atomic Refresh Without Temporarily Losing the Data", where I have two identical materialized views and I refresh them alternately.
Kishore (2023-07-11):
Hi,
We are using atomic_refresh=>FALSE when refreshing an MV, so it will truncate and insert data into the MV.
My doubt was: while the MV is refreshing, will the data be available to other users or not?
We are using Oracle 19c. Can you please let me know?

Anonymous (2021-06-08):
Thank you.

Anonymous (2021-04-27):
Thank you.

Appsian (2021-04-01):
Thank you for sharing your blog; it seems to be useful information. Can't wait to dig deep!

Neelesh Shah (2020-06-11):
https://livesql.oracle.com/apex/livesql/file/content_JH25JL4RR5OLFE6XSYEOJUAKM.html
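To make the non-atomic refresh discussion above concrete, here is a minimal sketch. The MV names and the synonym switch are illustrative assumptions: the synonym is just one way to implement the alternating pair of MVs described in the blog section, not necessarily the method used there.

```sql
-- Non-atomic refresh: the MV is truncated (DDL, so committed) and then
-- repopulated, so other sessions see an empty MV while this runs.
EXEC DBMS_MVIEW.REFRESH(list=>'MY_MV', method=>'C', atomic_refresh=>FALSE);

-- One way to avoid exposing an empty MV: keep two identical MVs,
-- refresh the offline copy, then repoint a synonym (illustrative).
EXEC DBMS_MVIEW.REFRESH(list=>'MY_MV_B', method=>'C', atomic_refresh=>FALSE);
CREATE OR REPLACE SYNONYM my_mv_current FOR my_mv_b;
```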
GG (2020-06-06):
Thank you.
Regards,
GG

David Kurtz (2020-06-04):
Yes, the performance comparison is done with DBA_HIST_SQLSTAT.

--define time window for scenarios in PLAN_TABLE--
alter session set nls_date_format = 'hh24:mi:ss dd.mm.yyyy';
Delete from plan_table where statement_id = 'SCENARIOS';
Commit;
Insert into plan_table (statement_id, plan_id, remarks, object_owner, partition_start, partition_stop)
VALUES ('SCENARIOS', 1, 'With Default Indexes', 'SOE', '20200317-115453', '20200317-125453');
Insert into plan_table (statement_id, plan_id, remarks, object_owner, partition_start, partition_stop)
VALUES ('SCENARIOS', 2, 'With Default Indexes', 'SOE', '20200317-130500', '20200317-140503');

--report on scenarios--
Set autotrace off pages 99 lines 160
Column sql_id format a13
column scenario_id heading 'Test|ID' format 999
column plan_hash_value heading 'SQL Plan|Hash Value'
column optimizer_cost heading 'Opt.|Cost' format 9999999
column num_executions heading 'Num|Execs' format 9999999
column elapsed_time heading 'Elapsed|Time' format 99999.99
column avg_elapsed_time heading 'Average|Elapsed|Time' format 99.9999
column pct_time heading '%|Time|Diff' format 999999
column pct_num_execs heading '%Num|Execs|Diff' format 999999
column plan_cf heading '?' format a1
break on report
with s as (
Select plan_id scenario_id, object_owner, remarks
, TO_DATE(partition_start,'yyyymmdd-hh24miss') begin_date
, TO_DATE(partition_stop,'yyyymmdd-hh24miss') end_date
From plan_table
Where statement_id = 'SCENARIOS'
), x as (
select s.scenario_id, t.sql_id, t.plan_hash_value
, MAX(t.optimizer_cost) optimizer_cost
, sum(t.executions_delta) num_executions
, sum(t.elapsed_time_delta)/1e6 elapsed_time
, sum(t.elapsed_time_delta)/sum(t.executions_delta)/1e6 avg_elapsed_time
from s
, dba_hist_snapshot x
, dba_hist_sqlstat t
where x.begin_interval_time < s.end_date
and x.end_interval_time > s.begin_date
and x.dbid = t.dbid
and x.instance_number = t.instance_number
and x.snap_id = t.snap_id
and t.executions_delta > 0
and t.parsing_schema_name = s.object_owner
group by s.scenario_id, t.sql_id, t.plan_hash_value
)
select x1.*
, CASE WHEN x1.plan_hash_value = x2.plan_hash_value THEN '=' ELSE '!' END plan_cf
, x2.*
, (x2.avg_elapsed_time/NULLIF(x1.avg_elapsed_time,0)-1)*100 pct_time
, (x2.num_executions/NULLIF(x1.num_executions,0)-1)*100 pct_num_execs
from x x1
 full outer join x x2
 on x1.sql_id = x2.sql_id
 and x2.scenario_id = 2 --change this--
where x1.scenario_id = 1 --change this--
order by x2.elapsed_time desc nulls last, x1.elapsed_time desc nulls last
fetch first 30 rows only
/
GG (2020-06-03):
Indeed, great write-up.
Especially the SQL performance comparison script looks pretty clean.
Does it rely only on DBA_HIST_SQLSTAT?
Regards,
Greg

David Kurtz (2020-06-03):
Good old-fashioned response-time-driven manual tuning. Use ASH to find the statements that accounted for the most time, look at the execution plans, and use ASH to confirm which lines of the plan consumed the most time. In this case, I had dropped the secondary indexes, so I knew the problem was likely to be a missing or inadequate index. Then it was just a matter of adding indexes to satisfy the problem statements. I would make one change at a time and then retest. I only needed four changes.
I'd like to think it is more or less Millsap's Method R.
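The ASH-driven method described above can be sketched with two queries (a rough sketch: ASH is sampled once a second, so sample counts only approximate DB time, and querying it requires the Diagnostics Pack licence):

```sql
-- Find the SQL statements accounting for the most sampled DB time.
SELECT sql_id, COUNT(*) AS ash_samples
FROM   v$active_session_history
WHERE  sql_id IS NOT NULL
GROUP  BY sql_id
ORDER  BY ash_samples DESC
FETCH FIRST 10 ROWS ONLY;

-- Then, for one SQL_ID, confirm which plan lines consumed the time.
SELECT sql_plan_line_id, sql_plan_operation, COUNT(*) AS ash_samples
FROM   v$active_session_history
WHERE  sql_id = '&sql_id'
GROUP  BY sql_plan_line_id, sql_plan_operation
ORDER  BY ash_samples DESC;
```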
Kyle Hailey (2020-06-03):
Nice write-up and investigation.
On the manual index tuning, what was your method to identify these indexes?
Thanks,
Kyle
Rob (2020-01-30):
Hello!

In 12c there is a new column in all the relevant tables: FULL_PLAN_HASH_VALUE.
It is not present in DBMS_XPLAN output, but that value is different for the different plans (I did a quick test).
Use it. It will solve the problem you described in this post.

v$sql.PLAN_HASH_VALUE:
"Numeric representation of the *current* SQL plan for this cursor. Comparing one PLAN_HASH_VALUE to another easily identifies whether or not two plans are the same (rather than comparing the two plans line by line)."

v$sql.FULL_PLAN_HASH_VALUE:
"Numeric representation of the *complete* SQL plan for this cursor. Comparing one FULL_PLAN_HASH_VALUE to another easily identifies whether or not two plans are the same (rather than comparing the two plans line by line). Note that the FULL_PLAN_HASH_VALUE cannot be compared across database releases. It is not backward compatible."

Cheers,
Rob
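Rob's point can be checked quickly against the shared pool (a sketch, assuming 12c or later, where the column exists):

```sql
-- Two cursors can share PLAN_HASH_VALUE (same join order and methods)
-- yet differ in FULL_PLAN_HASH_VALUE, because the full hash also covers
-- parts of the plan that the ordinary hash ignores.
SELECT sql_id, child_number, plan_hash_value, full_plan_hash_value
FROM   v$sql
WHERE  sql_id = '&sql_id'
ORDER  BY child_number;
```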
ChandlerDBA (2020-01-15):
Oh! Interesting quirk!

Of course, a data load only gives you partial stats (e.g. no histograms) with the online gather anyway (you're supposed to perform a GATHER_TABLE_STATS with options=>'GATHER AUTO' following the load), so it seems strange to disable all of the stats, given that METHOD_OPT restrictions wouldn't be used anyway.
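The follow-up gather that ChandlerDBA describes would look something like this (the table names are illustrative assumptions):

```sql
-- Direct-path load: online statistics gathering collects basic column
-- statistics but no histograms.
INSERT /*+ APPEND */ INTO sales_copy SELECT * FROM sales;
COMMIT;

-- The documented follow-up: GATHER AUTO fills in only what the online
-- gather missed (e.g. histograms) rather than re-gathering everything.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname=>user, tabname=>'SALES_COPY', options=>'GATHER AUTO');
```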
Mikhail Velikikh (2019-11-12):
Hi David,

You are referring to this 11.2 link:
https://docs.oracle.com/cd/E18283_01/server.112/e17120/dbrm004.htm#CHDBHBFD
which should be 11.2.0.2, based on the What's New section in the contents:
"Oracle Database 11g Release 2 (11.2.0.2) New Features in the Administrator's Guide"

The default 11.2 documentation on docs.oracle.com points to:
https://docs.oracle.com/html/E25494_01/dbrm002.htm#r12c1-t15
which has notes about those wildcards:
"You can use wildcards for the value of most attributes in the value parameter in the SET_CONSUMER_GROUP_MAPPING procedure."

It should be 11.2.0.3 at least, based on the What's New section (I guess they may not have updated the documentation for 11.2.0.4):
"Oracle Database 11g Release 2 (11.2.0.3) New Features in the Administrator's Guide"

There is also a note that says those wildcards were available in 11.2, and it mentions a documentation bug:
"Not able to use wildcards to specify service_name value Of Resource Consumer Group Mapping Rule (Doc ID 1992704.1)"

Those wildcards might even work in 10.2.0.4, since that version is mentioned in the MOS note above, but I have neither 10.2 nor 11.1/11.2 in my shop to confirm that.

Regards,
Mikhail

Anonymous (2019-11-02):
Hi David, there is a better way to import a dump for the same DBID with overlapping snapshot ranges: just modify awrload.sql and add the parameter NEW_DBID when calling "move_to_awr": dbms_swrf_internal.move_to_awr(schname => :schname, NEW_DBID=>'12345678').
The new versions of awrload.sql (12c+) include this additional parameter.

Tanel (2019-09-06):
Nice one! There's a TIMESTAMP column in DBA_HIST_SQL_PLAN (and v$sql_plan), meaning "Date and time when the execution plan was generated" according to the docs, so at least you'd know when the plan was built.

David Kurtz (2019-07-31):
Yes, and you could import them into a single container with different DBIDs using Kyle Hailey's script: https://sites.google.com/site/oraclemonitor/change-awr-dbid
See: http://dboptimizer.com/2011/04/16/importing-multiple-databases-awr-repositories/

YuriAP (2019-07-17):
As it appeared, Oracle XE 18c can be used to load AWR dumps; moreover, two dumps with overlapping snapshot ranges can be loaded into different pluggable databases of the same container and analysed by custom tools.

David Kurtz (2018-11-19):
Thank you, Patrick.
It is a thought-provoking question. Yes, you would expect developers not to include unnecessary tables. However, I think it depends on how the SQL is written.
For example, if you created reporting views to pre-join the tables but didn't filter or report on one of the tables in the view, then you would get join elimination.
So, I think there are cases where this feature is of value, but some systems will make more use of it than others.
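The reporting-view scenario in David's reply can be illustrated as follows (the schema is hypothetical; join elimination relies on the enabled, validated foreign key):

```sql
CREATE TABLE customers
( cust_id NUMBER PRIMARY KEY
, name    VARCHAR2(30));

CREATE TABLE orders
( order_id NUMBER PRIMARY KEY
, cust_id  NUMBER NOT NULL REFERENCES customers
, amount   NUMBER);

-- A reporting view that pre-joins the parent table.
CREATE OR REPLACE VIEW order_report AS
SELECT o.order_id, o.amount, c.name
FROM   orders o
JOIN   customers c ON c.cust_id = o.cust_id;

-- No CUSTOMERS column is selected or filtered, and the FK guarantees the
-- join cannot change the result, so the optimizer can eliminate
-- CUSTOMERS from the plan entirely.
SELECT order_id, amount FROM order_report;
```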
Patrick Hurley (2018-11-14):
Thanks for blogging about this, David.

What I'm struggling with is that I can't think of a genuine reason why you would ever be querying a parent table without referencing any attributes on it. So, in your example, the inclusion of the customer table in the FROM and WHERE clauses isn't necessary. Am I missing something?

Patrick Hurley
Twitter: @phurley

David Kurtz (2018-04-27):
I have examined clustering by linear order in 12.1.0.2. I ordered segments by columns frequently used in equality predicates, that have multiple distinct values, and that are not partitioning columns.
I found that clustering does improve the compression ratio of Hybrid Columnar Compressed objects, without apparently increasing the CPU overhead of decompression. There was no advantage for segments that were not compressed. However, caution is required because clustering sorts the data. This can be a significant overhead on larger segments when the data is loaded.
In my tests, I only clustered smaller objects. I did see an improvement in query performance for only a small increase in load time. I did not test zone maps because I was not able to get them to work in conjunction with fast-refreshing materialized views in non-atomic mode.
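A sketch of the attribute clustering David describes (12.1.0.2 or later; the table and columns are illustrative, and HCC itself requires Exadata or other supported Oracle storage):

```sql
-- Cluster rows by columns used in equality predicates so that each HCC
-- compression unit holds similar values; note the data is sorted on
-- load, which adds overhead on larger segments.
CREATE TABLE sales_hcc
( sale_id   NUMBER
, region_id NUMBER
, cust_id   NUMBER
, amount    NUMBER)
COLUMN STORE COMPRESS FOR QUERY HIGH
CLUSTERING BY LINEAR ORDER (region_id, cust_id);
```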
Loïc Lefèvre (2018-04-20):
Hello David,
Very informative, thanks.
To complement your post:
- HCC is also available for all Enterprise Edition editions with the Oracle Database Cloud Service.
- Did you try to increase the compression ratios using clustering attributes and zone maps (which would require the Partitioning option)? This has been available since 12.1.0.2.

Thanks,
Loïc

David Kurtz (2017-12-09):
Online redefinition is certainly an option for altering a table. For an index, you can make some changes with an online rebuild, but reversing them out also takes time. The point here is that you can switch it back and forth, and then, when you are satisfied, drop the unnecessary index.
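The switch-it-back-and-forth approach in David's reply reads like this (the index name is an illustrative assumption):

```sql
-- Hide the suspected-redundant index from the optimizer. The index is
-- still maintained by DML, so reversing the decision is instantaneous.
ALTER INDEX sales_redundant_ix INVISIBLE;

-- If any plan regresses during testing, switch it straight back:
ALTER INDEX sales_redundant_ix VISIBLE;

-- Only when satisfied that nothing needs it, drop it for good:
DROP INDEX sales_redundant_ix;
```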
Anonymous (2017-12-09):
Hi,
Thank you for the informative post.
Would you consider DBMS_REDEFINITION as an alternative?
Regards,
G
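For reference, an online redefinition runs roughly as follows (a hedged outline: the table names are illustrative, and details such as column mapping and the interim table's DDL are omitted):

```sql
-- An interim table with the desired new structure must already exist.
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE(user, 'SALES');
  DBMS_REDEFINITION.START_REDEF_TABLE(user, 'SALES', 'SALES_INTERIM');
  -- Copy indexes, triggers, constraints, etc. onto the interim table.
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(user, 'SALES', 'SALES_INTERIM',
                                          num_errors => l_errors);
  -- A brief lock swaps the tables; the original stays online until here.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(user, 'SALES', 'SALES_INTERIM');
END;
/
```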
Mohamed Houri (2017-01-08):
David,

I was not referring to the role of the extended statistics which, in the absence of the index, can help the optimizer to get a correct estimation. I was referring to the CBO refusing to use the remaining index while it was using the previous (redundant) one. Extended statistics work only with equalities, and I have seen many situations where they are not used because the indexed columns have histograms while the resulting virtual column representing the column group is not a skewed column. In addition, the clustering factor of the contained index is very often not as good as that of the redundant index, which, when deleted, might reveal that the cost of full-scanning the table is cheaper than scanning the remaining index.

I have had occasion to set many redundant indexes to invisible status, and have come to the conclusion (albeit heuristic) mentioned in my previous comment that this option might work very well when the contained and container indexes are of 1 and 2 columns respectively.

Best regards,
Mohamed Houri
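For completeness, the column-group extended statistics under discussion are created like this (table and columns are illustrative; as Mohamed notes, column groups only help equality predicates):

```sql
-- Create a column group so the optimizer can estimate the combined
-- selectivity of the two columns without needing the redundant index.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, 'SALES', '(REGION_ID,CHANNEL_ID)')
FROM   dual;

-- Gather statistics so the new virtual column is populated.
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'SALES', method_opt=>'FOR ALL COLUMNS SIZE AUTO');
```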