Thursday, February 18, 2021

Spatial Data 3. Analyse a track in proximity to a GPS route

This blog is part of a series about my first steps using Spatial Data in the Oracle database.  I am using the GPS data for my cycling activities collected by Strava.

Swain's Lane, Highgate
Now that I have loaded some data, I am going to start doing something useful with it.  I go out on my bike most mornings, and I usually ride up Swain's Lane in Highgate three times.  How long did each one take?  Over time, have I got faster or slower?

I need a definition of Swain's Lane that I can compare to.  I will start by drawing a route with my favourite GPS software.  A route is just a sequence of route points.  I can then export that as a GPX file.

<?xml version="1.0" encoding="UTF-8"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="ViewRanger - //www.viewranger.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd">
  <rte>
    <name><![CDATA[Swain's World]]></name>
    <rtept lat="51.569613039632" lon="-0.14770468632509"></rtept>
    <rtept lat="51.569407978151" lon="-0.14832964102552"></rtept>
    <rtept lat="51.567090552402" lon="-0.14674177328872"></rtept>
    <rtept lat="51.567080548869" lon="-0.14592101733016"></rtept>
    <rtept lat="51.569618041121" lon="-0.14773419062425"></rtept>
  </rte>
</gpx>

Geometries Table

I will load the GPX route into a table much as I did with the track files. 
drop table my_geometries purge;

create table my_geometries
(geom_id    NUMBER NOT NULL 
,descr      VARCHAR2(64)
,gpx        XMLTYPE
,geom       mdsys.sdo_geometry
,geom_27700 mdsys.sdo_geometry
,mbr        mdsys.sdo_geometry
,constraint my_geometries_pk PRIMARY KEY (geom_id)
)
XMLTYPE COLUMN gpx STORE AS SECUREFILE BINARY XML (CACHE DISABLE STORAGE IN ROW)
/
The difference is that I have a series of route points instead of track points, so the paths in extract() and extractvalue() are slightly different.
delete from my_geometries where geom_id = 2;
INSERT INTO my_geometries (geom_id, descr, gpx) 
VALUES (2,'Swains World Route', XMLTYPE(strava_pkg.getClobDocument('STRAVA','swainsworldroute.gpx')));

UPDATE my_geometries
SET geom = mdsys.sdo_geometry(2002,4326,null,mdsys.sdo_elem_info_array(1,2,1),
cast(multiset(
  select CASE n.rn WHEN 1 THEN pt.lng WHEN 2 THEN pt.lat END ord
  from (
    SELECT /*+MATERIALIZE*/ rownum rn
    ,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'rtept/@lon')) as lng
    ,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'rtept/@lat')) as lat
    FROM   my_geometries g,
           TABLE(XMLSEQUENCE(extract(g.gpx,'/gpx/rte/rtept','xmlns="http://www.topografix.com/GPX/1/1"'))) t
    where g.geom_id = 2
    ) pt,
    (select 1 rn from dual union all select 2 from dual) n
	order by pt.rn, n.rn
  ) AS mdsys.sdo_ordinate_array))
WHERE gpx IS NOT NULL
AND   geom IS NULL
/
UPDATE my_geometries
SET mbr = sdo_geom.sdo_mbr(geom)
,   geom_27700 = sdo_cs.transform(geom,27700)
/

Commit;
Set pages 99 lines 180 
Select geom_id, descr, gpx, geom 
from my_geometries
where geom_id = 2;

   GEOM_ID DESCR
---------- ----------------------------------------------------------------
GPX
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
         2 Swains World Route
<?xml version="1.0" encoding="US-ASCII"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="ViewRanger - //www.viewranger.com" xml
SDO_GEOMETRY(2002, 4326, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(-.14651114, 51.5670769, -.14649237, 51.567298, -.1465782, 51.567563, -.14680618, 51.5680165, -.14697
516, 51.5682533, -.14754379, 51.5688701, -.14807219, 51.5694887))
I am going to build spatial indexes on the geometry columns, so I need to define the upper and lower bound values on the coordinates.
delete from user_sdo_geom_metadata where table_name = 'MY_GEOMETRIES';
insert into user_sdo_geom_metadata (table_name,column_name,diminfo,srid)
values ( 
  'MY_GEOMETRIES' , 'GEOM_27700',
  sdo_dim_array(
    sdo_dim_element('Easting',-1000000,1500000,0.05), 
    sdo_dim_element('Northing', -500000,2000000,0.05)),
  27700);
insert into user_sdo_geom_metadata (table_name,column_name,diminfo,srid)
values ( 
  'MY_GEOMETRIES' , 'GEOM',
  sdo_dim_array(
    sdo_dim_element('Longitude',-180,180,0.05), 
    sdo_dim_element('Latitude',-90,90,0.05)),
  4326);
insert into user_sdo_geom_metadata (table_name,column_name,diminfo,srid)
values ( 
  'MY_GEOMETRIES' , 'MBR',
  sdo_dim_array(
    sdo_dim_element('Longitude',-180,180,0.05), 
    sdo_dim_element('Latitude',-90,90,0.05)),
  4326);
commit;

CREATE INDEX my_geometries_geom ON my_geometries (geom) INDEXTYPE IS MDSYS.SPATIAL_INDEX_v2;
CREATE INDEX my_geometries_geom_27700 ON my_geometries (geom_27700) INDEXTYPE IS MDSYS.SPATIAL_INDEX_v2;
CREATE INDEX my_geometries_mbr ON my_geometries (mbr) INDEXTYPE IS MDSYS.SPATIAL_INDEX_v2;
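With these indexes in place, the index-driven spatial operators (such as SDO_ANYINTERACT) become available as an alternative to the SDO_GEOM functions.  Purely as a sanity check of the new index on MY_GEOMETRIES, and just as a sketch rather than something I rely on later, I can ask which of my reference geometries interact with a given activity.

select g.geom_id, g.descr
from   my_geometries g
,      activities a
where  a.activity_id = 4468006769
and    sdo_anyinteract(g.geom, a.geom) = 'TRUE'
/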

Compare Geometries

Now I can compare my Swain's Lane geometry to my activity geometries.  Let's start by looking for rides in December 2020 that went up Swain's Lane.
Column activity_id heading 'Activity|ID'
Column activity_name format a30
Column geom_relate heading 'geom|relate' format a6
With a as (
SELECT a.activity_id, a.activity_date, a.activity_name
,      SDO_GEOM.RELATE(a.geom,'anyinteract',g.geom,25) geom_relate
FROM   activities a
,      my_geometries g
WHERE  a.activity_type = 'Ride'
--And    a.activity_id IN(4468006769)
And    a.activity_date >= TO_DATE('01122020','DDMMYYYY')
and    g.geom_id = 2 /*Swains World Route*/
)
Select *
From   a
Where  geom_relate = 'TRUE'
Order by activity_date
/
Where there is a relation between the two geometries, I have a hit.
  Activity                                                    geom
        ID ACTIVITY_DATE       ACTIVITY_NAME                  relate
---------- ------------------- ------------------------------ ------
4419821750 08:44:45 02.12.2020 Loop                           TRUE
4428307816 10:49:25 04.12.2020 Loop                           TRUE
4431920358 09:41:13 05.12.2020 Loop                           TRUE
…
4528825613 09:39:38 28.12.2020 Loop                           TRUE
4534027888 11:29:45 29.12.2020 Loop                           TRUE
4538488655 09:57:55 30.12.2020 Loop                           TRUE

25 rows selected.

Analyse Individual Efforts

Now I want to analyse each of my trips up Swain's Lane on a particular day.  I am going to work with the GPX rather than the spatial geometry because I am also interested in the time, elevation and heart rate data that are not stored in the spatial geometry.
Also, you can't use analytic functions on spatial geometries.
with x as (
SELECT activity_id
,      TO_DATE(EXTRACTVALUE(VALUE(t), 'trkpt/time'),'YYYY-MM-DD"T"HH24:MI:SS"Z"') time
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lat')) lat
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lon')) lng
FROM   activities a,
       TABLE(XMLSEQUENCE(extract(a.gpx,'/gpx/trk/trkseg/trkpt','xmlns="http://www.topografix.com/GPX/1/1"'))) t
WHERE  a.activity_id IN(4468006769)
), y as (
select x.*, strava_pkg.make_point(lng,lat) loc
from x
)
select lag(loc,1) over (partition by activity_id order by time) last_loc
from   y
/

select lag(loc,1) over (partition by activity_id order by time) last_loc
           *
ERROR at line 13:
ORA-22901: cannot compare VARRAY or LOB attributes of an object type
Instead, I will have to apply analytic functions to the values extracted from the GPX and then create a spatial point.  Thus I will be able to calculate the length of each individual trip by aggregating the distance between each pair of points.
The following query splits out each trip up Swain's Lane in a particular activity and shows the distance, duration, and metrics about elevation, gradient, and heart rate. 
alter session set statistics_level=ALL;
alter session set nls_date_Format = 'hh24:mi:ss dd.mm.yyyy';
break on activity_id skip 1
compute sum of sum_dist on activity_id
compute sum of num_pt on activity_id
compute sum of sum_secs on activity_id
Set lines 180 pages 50 timi on
Column activity_id heading 'Activity|ID'
Column activity_name format a15
column time format a20
column lat format 999.99999999
column lng format 999.99999999
column ele format 9999.9
column hr format 999
column sdo_relate format a10
column num_pts heading 'Num|Pts' format 99999
column sum_dist heading 'Dist.|(km)' format 999.999
column sum_secs heading 'Secs' format 9999
column avg_speed heading 'Avg|Speed|(kmph)' format 99.9
column ele_gain heading 'Ele|Gain|(m)' format 9999.9
column ele_loss heading 'Ele|Loss|(m)' format 9999.9
column avg_grade heading 'Avg|Grade|%' format 99.9
column min_ele heading 'Min|Ele|(m)' format 999.9
column max_ele heading 'Max|Ele|(m)' format 999.9
column avg_hr heading 'Avg|HR' format 999
column max_hr heading 'Max|HR' format 999
WITH geo as ( /*route geometry to compare to*/
select /*+MATERIALIZE*/ g.*, 25 tol
,      sdo_geom.sdo_length(geom, unit=>'unit=m') geom_length
from   my_geometries g
where  geom_id = 2 /*Swains World Route*/
), a as ( /*extract all points in activity*/
SELECT a.activity_id, g.geom g_geom, g.tol, g.geom_length
,      TO_DATE(EXTRACTVALUE(VALUE(t), 'trkpt/time'),'YYYY-MM-DD"T"HH24:MI:SS"Z"') time
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lat')) lat
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lon')) lng
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/ele')) ele
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/extensions/gpxtpx:TrackPointExtension/gpxtpx:hr'
       ,'xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxtpx="http://www.garmin.com/xmlschemas/TrackPointExtension/v1"')) hr
FROM   activities a,
       geo g,
       TABLE(XMLSEQUENCE(extract(a.gpx,'/gpx/trk/trkseg/trkpt'
       ,'xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxtpx="http://www.garmin.com/xmlschemas/TrackPointExtension/v1"'))) t
Where  a.activity_id IN(4468006769)
and    SDO_GEOM.RELATE(a.geom,'anyinteract',g.geom,g.tol) = 'TRUE' /*activity has relation to reference geometry*/
), b as ( /*smooth elevation*/
Select a.*
,      avg(ele) over (partition by activity_id order by time rows between 2 preceding and 2 following) avg_ele
From   a
), c as ( /*last point*/
Select b.*
,      row_number() over (partition by activity_id order by time) seq
,      lag(time,1) over (partition by activity_id order by time) last_time
,      lag(lat,1) over (partition by activity_id order by time) last_lat
,      lag(lng,1) over (partition by activity_id order by time) last_lng
--,      lag(ele,1) over (partition by activity_id order by time) last_ele
,      lag(avg_ele,1) over (partition by activity_id order by time) last_avg_ele
From   b
), d as ( /*make points*/
SELECT c.* 
,      strava_pkg.make_point(lng,lat) loc
,      strava_pkg.make_point(last_lng,last_lat) last_loc
FROM   c
), e as ( /*determine whether point is inside the polygon*/
select d.*
,      86400*(time-last_time) secs
,      avg_ele-last_avg_ele ele_diff
,      sdo_geom.sdo_distance(loc,last_loc,0.05,'unit=m') dist
,      SDO_GEOM.RELATE(loc,'anyinteract', g_geom, tol) sdo_relate
FROM   d
), f as (
select e.*
,      CASE WHEN sdo_relate != lag(sdo_relate,1) over (partition by activity_id order by time) THEN 1 END sdo_diff
from   e
), g as (
select f.*
,      SUM(sdo_diff) over (partition by activity_id order by time range between unbounded preceding and current row) sdo_seq
from f
where  sdo_relate = 'TRUE'
)
select activity_id, min(time), max(time)
, sum(dist)/1000 sum_dist
, sum(secs) sum_secs
, 3.6*sum(dist)/sum(secs) avg_speed
, sum(greatest(0,ele_diff)) ele_gain
, sum(least(0,ele_diff)) ele_loss
, 100*sum(ele_diff*dist)/sum(dist*dist) avg_grade
, min(ele) min_ele
, max(ele) max_ele
, sum(hr*secs)/sum(secs) avg_Hr
, max(hr) max_hr
, count(*) num_pts
from   g
group by activity_id, sdo_seq, g.geom_length
having sum(dist)>= g.geom_length/2 /*make sure line we find is longer than half route to prevent fragmentation*/
order by 2
/
select * from table(dbms_xplan.display_cursor(null,null,'ADVANCED +IOSTATS -PROJECTION +ADAPTIVE'))
/
4a_1swains.sql
  • In subquery a, I compare the geometry of the activity with the geometry of Swain's Lane using sdo_geom.relate() to confirm that the activity includes Swain's Lane, but then I extract all the points in the activity GPX.
  • GPS is optimised for horizontal accuracy.  Even so, the tolerance for determining whether the track is close to the route has to be set to 25m to allow for noise in the data (Swain's Lane is tree-lined and has walls on both sides, both of which attenuate the GPS signal).  GPS elevation data is notoriously noisy even under good conditions; you can see this in the variation of height gained on each ascent.  Subquery b calculates an average elevation across 5 track points (up to +/-2 points).
  • I need to compare each point in the track to the previous point so that I can do some calculations and determine when the track comes into proximity with the Swain's Lane route.  Subquery c uses analytic functions to determine the previous point.  It is not possible to apply an analytic function to a geometry.
  • Subquery e determines whether a track point is in proximity to the route.  The tolerance, 25m, is set in subquery geo.  Then subquery f flags where the track point is in proximity to the route and the previous one was not.  Finally, subquery g maintains a running total of the number of times the track has gone close enough to the route.  That becomes a sequence number for each ascent of Swain's Lane by which I can group the subsequent analytics.
                                                                     Avg     Ele     Ele   Avg    Min    Max
  Activity                                            Dist.        Speed    Gain    Loss Grade    Ele    Ele  Avg  Max   Num
        ID MIN(TIME)           MAX(TIME)               (km)  Secs (kmph)     (m)     (m)     %    (m)    (m)   HR   HR   Pts
---------- ------------------- ------------------- -------- ----- ------ ------- ------- ----- ------ ------ ---- ---- -----
4468006769 14:55:51 13.12.2020 14:58:17 13.12.2020     .372   147    9.1    36.1      .0   8.6   86.8  122.7  141  153   147
           15:08:13 13.12.2020 15:10:28 13.12.2020     .374   136    9.9    36.2      .0   8.4   86.8  122.8  147  155   136
           15:22:49 13.12.2020 15:25:18 13.12.2020     .369   150    8.9    36.2      .0   8.2   86.8  122.7  147  155   150
**********                                         -------- -----
sum                                                   1.116   433
On my laptop, this query takes about 10s, of which about 8s is spent on the window sort for the analytic functions, and 2s is spent working out whether the track points are in proximity to the route.
Plan hash value: 3042349692

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name             | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                  |      2 |        |       |       |  5147 (100)|          |      6 |00:00:20.94 |     392 |
|   1 |  SORT ORDER BY                           |                  |      2 |      1 |   104 |       |  5147   (1)| 00:00:01 |      6 |00:00:20.94 |     392 |
|*  2 |   FILTER                                 |                  |      2 |        |       |       |            |          |      6 |00:00:20.94 |     392 |
|   3 |    HASH GROUP BY                         |                  |      2 |      1 |   104 |       |  5147   (1)| 00:00:01 |      6 |00:00:20.94 |     392 |
|   4 |     VIEW                                 |                  |      2 |   8168 |   829K|       |  5144   (1)| 00:00:01 |    866 |00:00:20.93 |     392 |
|   5 |      WINDOW SORT                         |                  |      2 |   8168 |    16M|    21M|  5144   (1)| 00:00:01 |    866 |00:00:20.93 |     392 |
|*  6 |       VIEW                               |                  |      2 |   8168 |    16M|       |  1569   (1)| 00:00:01 |    866 |00:00:20.93 |     392 |
|   7 |        WINDOW SORT                       |                  |      2 |   8168 |  1403K|  1688K|  1569   (1)| 00:00:01 |  10208 |00:00:09.63 |     392 |
|   8 |         VIEW                             |                  |      2 |   8168 |  1403K|       |  1252   (1)| 00:00:01 |  10208 |00:00:06.22 |     392 |
|   9 |          WINDOW SORT                     |                  |      2 |   8168 |  1021K|  1248K|  1252   (1)| 00:00:01 |  10208 |00:00:06.20 |     392 |
|  10 |           VIEW                           |                  |      2 |   8168 |  1021K|       |  1016   (1)| 00:00:01 |  10208 |00:00:06.05 |     392 |
|  11 |            WINDOW SORT                   |                  |      2 |   8168 |  4546K|  5040K|  1016   (1)| 00:00:01 |  10208 |00:00:00.76 |     392 |
|  12 |             NESTED LOOPS                 |                  |      2 |   8168 |  4546K|       |    31   (0)| 00:00:01 |  10208 |00:00:00.41 |     392 |
|  13 |              NESTED LOOPS                |                  |      2 |      1 |   560 |       |     2   (0)| 00:00:01 |      2 |00:00:00.03 |     104 |
|  14 |               TABLE ACCESS BY INDEX ROWID| MY_GEOMETRIES    |      2 |      1 |   112 |       |     1   (0)| 00:00:01 |      2 |00:00:00.01 |       4 |
|* 15 |                INDEX UNIQUE SCAN         | MY_GEOMETRIES_PK |      2 |      1 |       |       |     0   (0)|          |      2 |00:00:00.01 |       2 |
|* 16 |               TABLE ACCESS BY INDEX ROWID| ACTIVITIES       |      2 |      1 |   448 |       |     1   (0)| 00:00:01 |      2 |00:00:00.03 |     100 |
|* 17 |                INDEX UNIQUE SCAN         | ACTIVITIES_PK    |      2 |      1 |       |       |     0   (0)|          |      2 |00:00:00.01 |       4 |
|  18 |              XPATH EVALUATION            |                  |      2 |        |       |       |            |          |  10208 |00:00:00.37 |     288 |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUM("DIST")>="G"."GEOM_LENGTH"/2)
   6 - filter("SDO_RELATE"='TRUE')
  15 - access("GEOM_ID"=2)
  16 - filter(("A"."ACTIVITY_TYPE"='Ride' AND "SDO_GEOM"."RELATE"("A"."GEOM",'anyinteract',"G"."GEOM",25)='TRUE'))
  17 - access("A"."ACTIVITY_ID"=4468006769)
I can apply this approach to all my trips up Swain's Lane.  However, I have logged 1115 ascents, and if I attempt to process them in a single SQL query I will have to do some very large window sorts that will spill out of memory (at least they will on my machine).  Instead, it is faster to process each activity separately in a PL/SQL loop (see 4b_allswains2.sql).
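The full script is on GitHub; the following is only a sketch of the shape of that loop.  The procedure analyse_swains and the results table swains_efforts are hypothetical names standing in for whatever wraps the query above and stores one row per ascent.

begin
  for a in (
    select a.activity_id
    from   activities a
    ,      my_geometries g
    where  a.activity_type = 'Ride'
    and    g.geom_id = 2 /*Swains World Route*/
    and    sdo_geom.relate(a.geom,'anyinteract',g.geom,25) = 'TRUE'
    order by a.activity_date
  ) loop
    analyse_swains(a.activity_id); /*hypothetical procedure wrapping the query above, inserting one row per ascent into swains_efforts*/
    commit;
  end loop;
end;
/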
I now have a table containing all of my ascents of Swain's Lane, and I can see whether I am getting faster or slower.  I simply dumped the data into Excel with SQL Developer.
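The trend could also be calculated in SQL.  For example, assuming the ascents land in the hypothetical swains_efforts table above, with a start time and a duration in seconds for each ascent, a monthly average might look like this:

select trunc(start_time,'MM') month
,      count(*) num_ascents
,      round(avg(secs),1) avg_secs
from   swains_efforts
group by trunc(start_time,'MM')
order by 1
/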
Unfortunately, I have discovered that I am not going faster!

Friday, February 12, 2021

Spatial Data 2: Convert GPX Track to a Spatial Line Geometry

This blog is part of a series about my first steps using Spatial Data in the Oracle database.  I am using the GPS data for my cycling activities collected by Strava.

Having loaded my GPS tracks from GPX files into an XML type column, the next stage is to extract the track points and create a spatial geometry column.  

Defining Spatial Geometries

Spatial objects are generically referred to as geometries.  When you define one, you have to specify what kind of geometry it is and what coordinate system you are using.  Later, when you compare geometries to each other, they have to use the same coordinate system; otherwise, Oracle will raise an error.  Fortunately, Oracle can convert between coordinate systems.

Various coordinate systems are used for geographical data; they are identified by EPSG Geodetic Parameter Dataset codes.  Oracle supports many of them.  As well as older definitions, it also has current definitions where the EPSG code matches the Spatial Reference ID (SDO_SRID).  They can be queried from SDO_COORD_REF_SYS.

I will use two different coordinate systems during this series of blogs.

Set lines 150 pages 99
Column coord_ref_sys_name format a35
Column legacy_cs_bounds format a110
select srid, coord_ref_sys_name, coord_ref_sys_kind, legacy_cs_bounds 
from SDO_COORD_REF_SYS where srid IN(4326, 27700)
/
      SRID COORD_REF_SYS_NAME                  COORD_REF_SYS_KIND
---------- ----------------------------------- ------------------------
LEGACY_CS_BOUNDS(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
--------------------------------------------------------------------------------------------------------------
      4326 WGS 84                              GEOGRAPHIC2D
SDO_GEOMETRY(2003, 4326, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 3), SDO_ORDINATE_ARRAY(-180, -90, 180, 90))

     27700 OSGB 1936 / British National Grid   PROJECTED

  • "The World Geodetic System (WGS) is a standard for use in cartography, geodesy, and satellite navigation including GPS". The latest revision is WGS 84 (also known as WGS 1984, EPSG:4326). It is the reference coordinate system used by the Global Positioning System (GPS).  Where I am dealing with longitude and latitude, specified in degrees, especially from GPS data, I need to tell Oracle that it is WGS84 by specifying SDO_SRID of 4326.
  • Later on, I will also be using data for Great Britain available from the Ordnance Survey that uses the Ordnance Survey National Grid (also known as British National Grid) reference system.  That requires SDO_SRID to be set to 27700.

Creating Spatial Points

I have found it useful to create a packaged function to convert longitude and latitude to a spatial data point.  It is a useful shorthand that I use in various places.

create or replace package body strava_pkg as 
k_module  CONSTANT VARCHAR2(48) := $$PLSQL_UNIT;
…
----------------------------------------------------------------------------------------------------
function make_point 
(longitude in number
,latitude  in number)
return sdo_geometry deterministic is
  l_module VARCHAR2(64);
  l_action VARCHAR2(64);
  l_point  sdo_geometry;
begin
  dbms_application_info.read_module(module_name=>l_module
                                   ,action_name=>l_action);
  dbms_application_info.set_module(module_name=>k_module
                                  ,action_name=>'make_point');

  if longitude is not null and latitude is not null then
    l_point :=
      sdo_geometry (
        2001, 4326,
        sdo_point_type (longitude, latitude, null),
        null, null
      );
  end if;

  dbms_application_info.set_module(module_name=>l_module
                                  ,action_name=>l_action);
  return l_point;
end make_point;
----------------------------------------------------------------------------------------------------
END strava_pkg;
/

strava_pkg.sql

There are two parameters to SDO_GEOMETRY that I always have to specify.

  • The first parameter, SDO_GTYPE, describes the nature of the spatial geometry being defined.  Here it is 2001.  The 2 indicates that it is a 2-dimensional geometry, and the 1 indicates that it is a single point.  See SDO_GEOMETRY Object Type.
  • The second parameter, SDO_SRID, defines the coordinate system that I discussed above.  4326 indicates that I am working with longitude and latitude.
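As a quick check of the function, I can build a point from one of the route points in the GPX above (the coordinates are the first point of the Swain's Lane route):

select strava_pkg.make_point(-0.14770468632509, 51.569613039632) loc
from   dual
/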

XML Namespace

GPS data is often held in GPX or GPS Exchange Format.  This is an XML schema.  GPX has been the de-facto XML standard for the lightweight interchange of GPS data since the initial GPX 1.0 release in 2002.  The GPX 1.1 schema was released in 2004 (see https://www.topografix.com/gpx.asp).  

Garmin has created an extension schema that holds additional athlete training information such as heart rate.

I can extract individual track points from a GPX with SQL using the extract() and extractvalue() functions.  However, I have GPX tracks that use both versions of the Topografix GPX schema (it depends upon which piece of software emitted the GPX file), and some that also use the Garmin extensions.

Therefore, I need to register all three schemas with Oracle.  I can download the schema files with wget.

cd /tmp/strava
wget http://www.topografix.com/GPX/1/0/gpx.xsd --output-document=gpx0.xsd
wget http://www.topografix.com/GPX/1/1/gpx.xsd
wget https://www8.garmin.com/xmlschemas/TrackPointExtensionv1.xsd

Then I can register the files.

delete from plan_table WHERE statement_id = 'XSD';
insert into plan_table (statement_id, plan_id, object_name, object_alias)
values ('XSD', 1, 'gpx0.xsd', 'http://www.topografix.com/GPX/1/0/gpx.xsd');
insert into plan_table (statement_id, plan_id, object_name, object_alias)
values ('XSD', 2, 'gpx.xsd', 'http://www.topografix.com/GPX/1/1/gpx.xsd');
insert into plan_table (statement_id, plan_id, object_name, object_alias)
values ('XSD', 3, 'TrackPointExtensionv1.xsd', 'https://www8.garmin.com/xmlschemas/TrackPointExtensionv1.xsd');

DECLARE
  xmlSchema xmlType;
  res       boolean;
BEGIN
  FOR i IN (
    SELECT object_alias schemaURL
    ,      object_name  schemaDoc
    FROM   plan_table
    WHERE  statement_id = 'XSD'
    ORDER BY plan_id
  ) LOOP
    --read xsd file
    xmlSchema := XMLTYPE(getCLOBDocument('STRAVA',i.schemaDoc,'AL32UTF8'));
    --if already exists delete XSD
    if (dbms_xdb.existsResource(i.schemaDoc)) then
        dbms_xdb.deleteResource(i.schemaDoc);
    end if;
    --create resource from XSD
    res := dbms_xdb.createResource(i.schemaDoc,xmlSchema);

    -- Delete existing  schema
    dbms_xmlschema.deleteSchema(
      i.schemaURL
    );
    -- Now reregister the schema
    dbms_xmlschema.registerSchema(
      i.schemaURL,
      xmlSchema,
      TRUE,TRUE,FALSE,FALSE
    );
  END LOOP;
End;
/
3a_register_xml_schema.sql

Then I can query the registered schemas.

Set pages 99 lines 160
Column schema_url format a60
Column qual_schema_url format a105
select schema_url, local, hier_type, binary, qual_schema_url
from user_xml_schemas
/

SCHEMA_URL                                                   LOC HIER_TYPE   BIN
------------------------------------------------------------ --- ----------- ---
QUAL_SCHEMA_URL
---------------------------------------------------------------------------------------------------------
https://www8.garmin.com/xmlschemas/TrackPointExtensionv1.xsd YES CONTENTS    NO
http://xmlns.oracle.com/xdb/schemas/STRAVA/https://www8.garmin.com/xmlschemas/TrackPointExtensionv1.xsd

http://www.topografix.com/GPX/1/0/gpx.xsd                    YES CONTENTS    NO
http://xmlns.oracle.com/xdb/schemas/STRAVA/www.topografix.com/GPX/1/0/gpx.xsd

http://www.topografix.com/GPX/1/1/gpx.xsd                    YES CONTENTS    NO
http://xmlns.oracle.com/xdb/schemas/STRAVA/www.topografix.com/GPX/1/1/gpx.xsd

Extracting GPS Track Points from GPX

A GPS track is a list of points specifying at least time, longitude, latitude and often elevation.  I can extract all the points in a GPX as a set of rows.  However, I must specify the correct namespace for the specific GPX.

Column time_string format a20
SELECT g.activity_id
,      EXTRACTVALUE(VALUE(t), 'trkpt/time') time_string
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lat')) lat
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lon')) lng
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/ele')) ele
,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/extensions/gpxtpx:TrackPointExtension/gpxtpx:hr'
       ,'xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxtpx="http://www.garmin.com/xmlschemas/TrackPointExtension/v1"')) hr
 FROM activities g,
      TABLE(XMLSEQUENCE(extract(g.gpx,'/gpx/trk/trkseg/trkpt'
      ,'xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxtpx="http://www.garmin.com/xmlschemas/TrackPointExtension/v1"'
      ))) t
Where  activity_id IN(4468006769)
And rownum <= 10
/

  Activity
        ID TIME_STRING                    LAT           LNG     ELE   HR
---------- -------------------- ------------- ------------- ------- ----
4468006769 2020-12-13T14:31:13Z   51.52963800    -.18753600    30.6   57
           2020-12-13T14:31:14Z   51.52963500    -.18753400    30.6   57
           2020-12-13T14:31:15Z   51.52964100    -.18753100    30.6   57
           2020-12-13T14:31:16Z   51.52964000    -.18752900    30.6   57
           2020-12-13T14:31:17Z   51.52963600    -.18752700    30.6   57
           2020-12-13T14:31:18Z   51.52963200    -.18752700    30.6   57
           2020-12-13T14:31:19Z   51.52962900    -.18752800    30.6   57
           2020-12-13T14:31:20Z   51.52962800    -.18752800    30.6   57
           2020-12-13T14:31:21Z   51.52962800    -.18752900    30.6   57
           2020-12-13T14:31:22Z   51.52962800    -.18753000    30.6   57

I can use this approach to extract all the points from a GPS track and create a spatial line geometry.  I have put the whole process into a packaged procedure strava_pkg.load_activity.

First, I need to work out which version of the Topografix schema is in use, so I try extracting the version attribute with each namespace and see which one is not null.

…
IF l_num_rows > 0 THEN
  UPDATE activities
  SET    gpx = XMLTYPE(l_gpx), geom = null, geom_27700 = null, num_pts = 0, xmlns = NULL
  WHERE  activity_id = p_activity_id
  RETURNING extractvalue(gpx,'/gpx/@version', 'xmlns="http://www.topografix.com/GPX/1/0"') 
  ,         extractvalue(gpx,'/gpx/@version', 'xmlns="http://www.topografix.com/GPX/1/1"') 
  INTO      l_xmlns0, l_xmlns1;
  l_num_rows := SQL%rowcount;
END IF;
…

Now I can extract all the points in a GPX as a set of rows and put them into a spatial geometry.  I turn each row, with its two coordinates, into two rows with one ordinate each.  Note that longitude is listed before latitude for each point.  I convert the rows into a list using multiset() and finally cast that as a spatial ordinate array.

Note that the SDO_GTYPE is 2002 (rather than 2001) because it is a line (rather than a single point) on a two-dimensional coordinate system.

  BEGIN
    UPDATE activities a
    SET geom = mdsys.sdo_geometry(2002,4326,null,mdsys.sdo_elem_info_array(1,2,1),
    cast(multiset(
      select CASE n.rn WHEN 1 THEN pt.lng WHEN 2 THEN pt.lat END ord
      from (
        SELECT rownum rn
        ,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lon')) as lng
        ,      TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/@lat')) as lat
        FROM   TABLE(XMLSEQUENCE(extract(a.gpx,'/gpx/trk/trkseg/trkpt', 'xmlns="http://www.topografix.com/GPX/1/1"'))) t
        ) pt,
        (select 1 rn from dual union all select 2 from dual) n
	    order by pt.rn, n.rn
      ) AS mdsys.sdo_ordinate_array))
    , xmlns = 'xmlns="http://www.topografix.com/GPX/1/1"'
    WHERE  a.gpx IS NOT NULL
    And    activity_id = p_activity_id;
    l_num_rows := SQL%rowcount;
  EXCEPTION
    WHEN e_13034 OR e_29877 THEN 
	  dbms_output.put_line('Exception:'||sqlerrm);
	  l_num_rows := 0;
  END;

I have found it helpful to simplify the line geometry with sdo_util.simplify(). It removes some of the noise in the GPS data and has resolved problems with calculating the length of lines that intersect with areas.

  BEGIN
    UPDATE activities 
    SET    geom = sdo_util.simplify(geom,1)
    WHERE  geom IS NOT NULL
    And    activity_id = p_activity_id;
    l_num_rows := SQL%rowcount;
  EXCEPTION
    WHEN e_13034 THEN 
	  dbms_output.put_line('Exception:'||sqlerrm);
  END;

There are a few other fields I also update at this point.  You will see me use them later.

  • NUM_PTS is the number of points in the line geometry.  
  • GEOM_27700 is the result of converting the line to British National Grid reference coordinates.  This helps when comparing it to British boundary data obtained from the Ordnance Survey or other government agencies.
  • MBR is the minimum bounding rectangle for the line.  This is generated to enable me to improve the performance of some spatial queries.  I have found that some of the spatial operators that calculate intersections between geometries are quite slow and CPU-intensive when applied to GPS tracks and boundary data that both have lots of points.  SDO_GEOM.SDO_MBR simply returns 4 ordinates that define the bounding rectangle.  This can be used to cheaply identify geometries that might match before doing the full comparison, as sketched after the next code excerpt.

  UPDATE activities 
  SET    num_pts = SDO_UTIL.GETNUMVERTICES(geom)
  ,      geom_27700 = sdo_cs.transform(geom,27700)
  ,      mbr = sdo_geom.sdo_mbr(geom)
  WHERE  geom IS NOT NULL
  And    activity_id = p_activity_id
  RETURNING num_pts INTO l_num_pts;
  dbms_output.put_line('Activity ID:'||p_activity_id||', '||l_num_pts||' points');
…
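As an illustration of the MBR pre-filter mentioned above (just a sketch of the pattern, not a query from my scripts): the bounding rectangles have only four ordinates each, so comparing them gives the optimizer a cheap way to eliminate most candidates before the full-geometry comparison.  For example, to find other activities whose tracks overlap a given activity:

select b.activity_id
from   activities a
,      activities b
where  a.activity_id = 4468006769
and    b.activity_id != a.activity_id
and    sdo_geom.relate(a.mbr, 'anyinteract', b.mbr, 25) = 'TRUE'   /*cheap: 4-ordinate bounding rectangles*/
and    sdo_geom.relate(a.geom, 'anyinteract', b.geom, 25) = 'TRUE' /*expensive: full track geometries*/
/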

Now I can load each GPX and process it into a spatial geometry in one step.  I can process all of the activities in a simple loop.

set serveroutput on timi on
exec strava_pkg.load_activity(4468006769);
Loading Activity: 4468006769
ACTIVITIES/4468006769.gpx.gz - 1286238 bytes
xmlns 1=StravaGPX Android
Activity ID:4468006769, 998 points

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.41
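That is a single activity.  To process every activity, a simple driving loop is enough; something along these lines should work (a sketch rather than my actual script):

begin
  for a in (
    select activity_id
    from   activities
    where  filename like '%.gpx%'
    order by activity_id
  ) loop
    strava_pkg.load_activity(a.activity_id);
    commit;
  end loop;
end;
/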
Now my Strava activities are all in spatial geometries and I can start to do some spatial processing.

Wednesday, February 03, 2021

Spatial Data 1: Loading GPX data into XML data types

This blog is part of a series about my first steps using Spatial Data in the Oracle database.  I am using the GPS data for my cycling activities collected by Strava.

In these posts I have only shown extracts of some of the scripts I have written.  The full files are available on github.

Upload and Expand Strava Bulk Export

Strava will bulk export all your data to a zipped folder.  It contains various CSV files.  I am interested in activities.csv that contains a row for each activity with various pieces of data including the name of the data file that can be found in the /activities directory.  That file will usually be a .gpx, or it may be zipped as a .gpx.gz file.  GPX is an XML schema that contains sets of longitude/latitude coordinates and may contain other attributes.  

The first job is to upload the Strava export .zip file to somewhere accessible to the database server (in my case /vagrant) and to expand it (to /tmp/strava/).

cd /vagrant
mkdir /tmp/strava
unzip /vagrant/export_1679301.zip -d /tmp/strava

Create Strava Schema 

I need to create a new database schema to hold the various objects I will create, and I have to give it certain privileges.
connect / as sysdba
create user strava identified by strava;
grant connect, resource to strava;
grant create view to strava;
grant select_catalog_role to strava;
grant XDBADMIN to STRAVA;
grant alter session to STRAVA;
alter user strava quota unlimited on users;
alter user strava default tablespace users;

GRANT CREATE ANY DIRECTORY TO strava;
CREATE OR REPLACE DIRECTORY strava as '/tmp/strava';
CREATE OR REPLACE DIRECTORY activities as '/tmp/strava/activities';
CREATE OR REPLACE DIRECTORY exec_dir AS '/usr/bin';

GRANT READ, EXECUTE ON DIRECTORY exec_dir TO strava;
GRANT READ, EXECUTE ON DIRECTORY strava TO strava;
GRANT READ ON DIRECTORY activities TO strava;
  • I need to create database directories for both the CSV files in /tmp/strava and the various GPX files in the /tmp/strava/activities sub-directory.  I will need read privilege on both directories, and also execute privilege on the strava directory so that I can use a pre-processor script.
  • The exec_dir directory points to /usr/bin where the zip executables are located.  I need read and execute privilege on this so I can read directly from zipped files.
  • XDBADMIN: "Allows the grantee to register an XML schema globally, as opposed to registering it for use or access only by its owner. It also lets the grantee bypass access control list (ACL) checks when accessing Oracle XML DB Repository".

Import CSV file via an External Table

I will start by creating an external table to read the Strava activities.csv file, and then copy it into a database table.  This file is a simple comma-separated values file.  The activity date, name and description are enclosed in double-quotes.
The first problem that I encountered was that some of the descriptions I typed into Strava contain newline characters and the external table interprets them as the end of the record even though these characters are inside the double-quotes.
4380927517,"23 Nov 2020, 18:03:54",Zwift Crash Recovery,Virtual Ride,"Zwift Crash Recovery
1. recover fit file per https://zwiftinsider.com/retrieve-lost-ride/, 
2. fix corrupt .fit file with https://www.fitfiletools.com",1648,13.48,,false,Other,activities/4682540615.gpx.gz,,10.0,1648.0,1648.0,13480.2001953125,13.199999809265137,
8.179733276367188,91.0,36.20000076293945,12.600000381469727,69.5999984741211,7.099999904632568,0.40652215480804443,,,84.0,62.1943244934082,
,,,150.66201782226562,276.8444519042969,,,,,,,,,,,,158.0,1649.0,,,0.0,,1.0,,,,,,,,,,,,,,,,4907360.0,,,,,,,,,,,
As Chris Saxon points out on AskTom, it is necessary to pre-process the records to replace the newline characters with something else.  I found this awk script to process the records, so I put it into a shell script, nlfix.sh, made it executable and invoked it as a pre-processor in the external table definition.
#nlfix.sh
/usr/bin/gawk -v RS='"' 'NR % 2 == 0 { gsub(/\n/, "") } { printf("%s%s", $0, RT) }' $*
nlfix.sh
  • Note the full path for gawk is specified.
A database directory is needed for the location of the pre-processor scripts and it is necessary to grant read and execute privileges on it.  I simply put the pre-processor in the same directory as the CSV file so I could use the same strava directory I created earlier.
GRANT READ, EXECUTE ON DIRECTORY strava TO strava;
Now I can define an external table that will read the activities.csv file. 
CREATE TABLE strava.activities_ext
(Activity_ID NUMBER
,Activity_Date DATE
,Activity_Name VARCHAR2(100)
,Activity_Type VARCHAR2(15)
,Activity_Description VARCHAR2(200)
,Elapsed_Time NUMBER
,Distance_km NUMBER
…)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
 DEFAULT DIRECTORY strava
 ACCESS PARAMETERS 
 (RECORDS DELIMITED BY newline 
  SKIP 1
  DISABLE_DIRECTORY_LINK_CHECK
  PREPROCESSOR strava:'nlfix.sh' 
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' RTRIM
  MISSING FIELD VALUES ARE NULL
  REJECT ROWS WITH ALL NULL FIELDS
  NULLIF = BLANKS
(Activity_ID,Activity_Date date "DD Mon yyyy,HH24:mi:ss"
,Activity_Name,Activity_Type,Activity_Description
,Elapsed_Time,Distance_km
…))
LOCATION ('activities.csv')
) REJECT LIMIT 5
/

Import Activities

Now I can simply copy from the external table to a regular table.  I have omitted a lot of columns that Strava does not populate (at least not in my export) but that appear in the CSV file.
rem 1b_create_activities_ext.sql
spool 1b_create_activities_ext 

CREATE TABLE strava.activities AS
select ACTIVITY_ID,ACTIVITY_DATE,ACTIVITY_NAME,ACTIVITY_TYPE,ACTIVITY_DESCRIPTION,
ELAPSED_TIME,DISTANCE_KM,RELATIVE_EFFORT,COMMUTE_CHAR,ACTIVITY_GEAR,
FILENAME,
ATHLETE_WEIGHT,BIKE_WEIGHT,ELAPSED_TIME2,MOVING_TIME,DISTANCE_M,MAX_SPEED,AVERAGE_SPEED,
ELEVATION_GAIN,ELEVATION_LOSS,ELEVATION_LOW,ELEVATION_HIGH,MAX_GRADE,AVERAGE_GRADE,
--AVERAGE_POSITIVE_GRADE,AVERAGE_NEGATIVE_GRADE,
MAX_CADENCE,AVERAGE_CADENCE,
--MAX_HEART_RATE,
AVERAGE_HEART_RATE,
--MAX_WATTS,
AVERAGE_WATTS,CALORIES,
--MAX_TEMPERATURE,AVERAGE_TEMPERATURE,
RELATIVE_EFFORT2,
TOTAL_WORK,
--NUMBER_OF_RUNS,
--UPHILL_TIME,DOWNHILL_TIME,OTHER_TIME,
PERCEIVED_EXERTION,
--TYPE,
--START_TIME,
WEIGHTED_AVERAGE_POWER,POWER_COUNT,
PREFER_PERCEIVED_EXERTION,PERCEIVED_RELATIVE_EFFORT,
COMMUTE,
--TOTAL_WEIGHT_LIFTED,
FROM_UPLOAD,
GRADE_ADJUSTED_DISTANCE,
--WEATHER_OBSERVATION_TIME,WEATHER_CONDITION,
--WEATHER_TEMPERATURE,APPARENT_TEMPERATURE,
--DEWPOINT,HUMIDITY,WEATHER_PRESSURE,
--WIND_SPEED,WIND_GUST,WIND_BEARING,
--PRECIPITATION_INTENSITY,
--SUNRISE_TIME,SUNSET_TIME,MOON_PHASE,
BIKE
--GEAR,
--PRECIPITATION_PROBABILITY,PRECIPITATION_TYPE,
--CLOUD_COVER,WEATHER_VISIBILITY,UV_INDEX,WEATHER_OZONE,
--JUMP_COUNT,TOTAL_GRIT,AVG_FLOW,
--FLAGGED
FROM strava.activities_ext
/

ALTER TABLE activities ADD CONSTRAINT activities_pk PRIMARY KEY (activity_id);
…
ALTER TABLE activities ADD (gpx XMLTYPE) XMLTYPE COLUMN gpx STORE AS SECUREFILE BINARY XML (CACHE DISABLE STORAGE IN ROW);
ALTER TABLE activities ADD (geom mdsys.sdo_geometry);
ALTER TABLE activities ADD (geom_27700 mdsys.sdo_geometry);
ALTER TABLE activities ADD (mbr mdsys.sdo_geometry);
ALTER TABLE activities ADD (xmlns VARCHAR2(128));
ALTER TABLE activities ADD (num_pts INTEGER DEFAULT 0);

Spool off
  • I have specified a primary key on activity_id and made a number of other columns not nullable.
  • I have added a new XMLTYPE column GPX into which I will load the GPS data in the .gpx files.  

FIT files

Some applications, such as Garmin and Rouvy, generate compressed .fit files, and Strava exports them again (apparently if it can't convert them, although it can convert the .fit files from Zwift to .gpx).  These are binary files, and since I only have a few of them, I converted them to .gpx files using GPSBabel on my laptop and then re-uploaded the .gpx files.
for %i in (*.fit.gz) do "C:\Program Files\GnuWin\bin\gzip" -fd %i
for %i in (*.fit) do "C:\Program Files (x86)\GPSBabel\GPSBabel.exe" -i garmin_fit -f "%i" -o gpx -F "%~ni".gpx
I then update the file name in the activities table.
UPDATE activities
SET filename = REPLACE(filename,'.fit.gz','.gpx')
WHERE filename like '%.fit.gz'
/

Compress GPX files (optional)

Some of the GPX files in the Strava export are compressed and some are not.  There is no obvious reason why.  To minimise the space I can gzip the GPX files.
gzip -9v /tmp/strava/activities/*.gpx
If I do compress any .gpx files, then I also need to update the file names in the activities table.
UPDATE activities
Set filename = filename||'.gz'
Where filename like '%.gpx'
/

Load the GPX files into the XML data type

The next stage is to load each of the GPX files into the activities table.  
create or replace package body strava_pkg as 
k_module      CONSTANT VARCHAR2(48) := $$PLSQL_UNIT;
…
----------------------------------------------------------------------------------------------------
function getClobDocument
(p_directory IN VARCHAR2
,p_filename  IN VARCHAR2
,p_charset   IN VARCHAR2 DEFAULT NULL
) return        CLOB deterministic
is
  l_module VARCHAR2(64); 
  l_action VARCHAR2(64);

  v_filename      VARCHAR2(128);
  v_directory     VARCHAR2(128);
  v_file          bfile;
  v_unzipped      blob := empty_blob();

  v_Content       CLOB := ' ';
  v_src_offset    number := 1 ;
  v_dst_offset    number := 1 ;
  v_charset_id    number := 0;
  v_lang_ctx      number := DBMS_LOB.default_lang_ctx;
  v_warning       number;

  e_22288 EXCEPTION; --file or LOB operation FILEOPEN failed
  PRAGMA EXCEPTION_INIT(e_22288, -22288);
BEGIN
  dbms_application_info.read_module(module_name=>l_module
                                   ,action_name=>l_action);
  dbms_application_info.set_module(module_name=>k_module
                                  ,action_name=>'getClobDocument');

  IF p_charset IS NOT NULL THEN
    v_charset_id := NLS_CHARSET_ID(p_charset);
  END IF;

  v_filename  := REGEXP_SUBSTR(p_filename,'[^\/]+',1,2);
  v_directory := REGEXP_SUBSTR(p_filename,'[^\/]+',1,1);

  IF v_directory IS NOT NULL and v_filename IS NULL THEN /*if only one parameters then it is actually a filename*/
    v_filename := v_directory; 
    v_directory := '';
  END IF;

  IF p_directory IS NOT NULL THEN
    v_directory := p_directory;
  END IF;

  v_File := bfilename(UPPER(v_directory),v_filename);

  BEGIN
    DBMS_LOB.fileopen(v_File, DBMS_LOB.file_readonly);
  exception 
    when VALUE_ERROR OR e_22288 then
      dbms_output.put_line('Can''t open:'||v_directory||'/'||v_filename||' - '||v_dst_offset||' bytes');
      v_content := '';
      dbms_application_info.set_module(module_name=>l_module
                                      ,action_name=>l_action);
      return v_content;
  END;

  IF v_filename LIKE '%.gz' THEN
    v_unzipped := utl_compress.lz_uncompress(v_file);
    dbms_lob.converttoclob(
      dest_lob     => v_content,
      src_blob     => v_unzipped,
      amount       => DBMS_LOB.LOBMAXSIZE, 
      dest_offset  => v_dst_offset,
      src_offset   => v_src_offset,
      blob_csid    => dbms_lob.default_csid,
      lang_context => v_lang_ctx,
      warning      => v_warning);
  ELSE --ELSIF v_filename LIKE '%.g__' THEN
    DBMS_LOB.LOADCLOBFROMFILE(v_Content, 
      Src_bfile    => v_File,
      amount       => DBMS_LOB.LOBMAXSIZE, 
      src_offset   => v_src_offset, 
      dest_offset  => v_dst_offset,
      bfile_csid   => v_charset_id, 
      lang_context => v_lang_ctx,
      warning => v_warning);
  END IF;

  dbms_output.put_line(v_directory||'/'||v_filename||' - '||v_dst_offset||' bytes');
  DBMS_LOB.fileclose(v_File);

  dbms_application_info.set_module(module_name=>l_module
                                  ,action_name=>l_action);

  return v_Content;
exception when others then
  dbms_output.put_line(v_directory||'/'||v_filename||' - '||v_dst_offset||' bytes');
  DBMS_LOB.fileclose(v_File);
  dbms_application_info.set_module(module_name=>l_module
                                  ,action_name=>l_action);
  raise;
end getClobDocument;
----------------------------------------------------------------------------------------------------
…
END strava_pkg;
/
I can simply query the contents of the uncompressed GPX file in SQL by calling the function.  In this case, the zipped .gpx file is 65KB but decompresses to 1.2MB.
Set long 1000 lines 200 pages 99 serveroutput on
Column filename  format a30
Column gpx format a100
select activity_id, filename
, getClobDocument('',filename) gpx
from activities
where filename like '%.gpx%'
And activity_id = 4468006769
order by 1
/

ACTIVITY_ID FILENAME                       GPX
----------- ------------------------------ ----------------------------------------------------------------------------------------------------
 4468006769 activities/4468006769.gpx.gz   <?xml version="1.0" encoding="UTF-8"?>
                                           <gpx creator="StravaGPX Android" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLoc
                                           ation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd http://www.garmin
                                           .com/xmlschemas/GpxExtensions/v3 http://www.garmin.com/xmlschemas/GpxExtensionsv3.xsd http://www.gar
                                           min.com/xmlschemas/TrackPointExtension/v1 http://www.garmin.com/xmlschemas/TrackPointExtensionv1.xsd
                                           " version="1.1" xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxtpx="http://www.garmin.com/xmlsch
                                           emas/TrackPointExtension/v1" xmlns:gpxx="http://www.garmin.com/xmlschemas/GpxExtensions/v3">
                                            <metadata>
                                             <time>2020-12-13T14:31:13Z</time>
                                            </metadata>
                                            <trk>
                                             <name>Loop</name>
                                             <type>1</type>
                                             <trkseg>
                                              <trkpt lat="51.5296380" lon="-0.1875360">
                                               <ele>30.6</ele>
                                               <time>2020-12-13T14:31:13Z</time>
                                               <extensions>
                                                <gpxtpx:TrackPointExtension>
                                                 <gpxtpx:hr>57</gpxtpx:hr>
                                                </gpxtpx:TrackPointExtension>
                                               </extensions>
                                              </trkpt>
…

activities/4468006769.gpx.gz - 1286238
Elapsed: 00:00:00.14
I can load the .gpx files into the GPX column of the activities table with a simple update statement.  The CLOB returned by the function is converted to XML with the XMLTYPE constructor.
UPDATE activities
SET gpx = XMLTYPE(getClobDocument('ACTIVITIES',filename))
WHERE filename like '%.gpx%'
/
I can now query back the same GPX from the database.
Set long 1100 lines 200 pages 99 serveroutput on
select activity_id, filename, gpx
from activities
where filename like '%.gpx%'
And activity_id = 4468006769
order by 1
/

ACTIVITY_ID FILENAME                       GPX
----------- ------------------------------ ----------------------------------------------------------------------------------------------------
 4468006769 activities/4468006769.gpx.gz   <?xml version="1.0" encoding="US-ASCII"?>
                                           <gpx creator="StravaGPX Android" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLoc
                                           ation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd http://www.garmin
                                           .com/xmlschemas/GpxExtensions/v3 http://www.garmin.com/xmlschemas/GpxExtensionsv3.xsd http://www.gar
                                           min.com/xmlschemas/TrackPointExtension/v1 http://www.garmin.com/xmlschemas/TrackPointExtensionv1.xsd
                                           " version="1.1" xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxtpx="http://www.garmin.com/xmlsch
                                           emas/TrackPointExtension/v1" xmlns:gpxx="http://www.garmin.com/xmlschemas/GpxExtensions/v3">
                                             <metadata>
                                               <time>2020-12-13T14:31:13Z</time>
                                             </metadata>
                                             <trk>
                                               <name>Loop</name>
                                               <type>1</type>
                                               <trkseg>
                                                 <trkpt lat="51.5296380" lon="-0.1875360">
                                                   <ele>30.6</ele>
                                                   <time>2020-12-13T14:31:13Z</time>
                                                   <extensions>
                                                     <gpxtpx:TrackPointExtension>
                                                       <gpxtpx:hr>57</gpxtpx:hr>
                                                     </gpxtpx:TrackPointExtension>
                                                   </extensions>
                                                 </trkpt>
                                                 <trkpt lat="51.5296350" lon="-0.1875340">
…

First Steps in Spatial Data

This is the introductory blog post in a series about using Spatial Data in the Oracle database.

Caveat: Spatial Data has been a part of the Oracle database since at least version 8i.  I have been aware of it for many years, but have never previously used it myself.  Recently, I have had some spare time and decided to experiment with it.  These blogs document my first steps.  I have spent a lot of time reading the documentation and using Google to find other people's blogs.  Where I found useful material I have provided links to it.  It is likely that more experienced developers can point out my mistakes and better methods to achieve the same results, in which case I will gladly publish comments and make corrections to my material.

Index

  1. Loading GPX data into XML data types
  2. Convert GPX Track to a Spatial Line Geometry

Problem Statement

A map reading stop!
When I am not working with Oracle databases, I am a keen cyclist and I ride with a touring club.  I have also always enjoyed maps, having been taught to read Ordnance Survey maps at school.  It is no surprise, therefore, that I lead rides for my cycling club.  We used to use (and you can still buy) paper maps.  By 2005 I was starting to use a GPS.  Initially, I recorded rides as tracks on a PDA.  By 2012, I was regularly using an Android tablet on my handlebar bag for navigation.  The market has caught up, and people now attach their phones to their handlebars or have dedicated bike computers with GPS and Bluetooth links to their phones.  The cycling club website includes a library of the routes of previous rides; however, you can only search it by the structured data held for each ride.  So, for example, I can only search for rides in the Chilterns if that word appears in the description.  I cannot do a spatial search.

I have also started to use Strava, an internet service for tracking exercise.  It is mainly used by cyclists and runners.  Activities can be recorded on a phone or other device and then uploaded, compared and analysed.  Every time I go out on the bike I upload the activity.  I also uploaded my back catalogue of GPS data.  As a result of the Coronavirus lockdowns, I bought an indoor trainer that I use with Zwift, which also posts data to Strava.  My most recent toy is a heart monitor; both Strava and Zwift capture data from that too.  Strava will let you see a certain amount of analysis about your activities and how you compare to other people, and more if you pay for their subscription service.  They will also allow you to export and download all of your data: the structured data in CSV files, plus the GPX files and photographs that you uploaded.

I thought it would be interesting to try to analyse and interrogate that data.  Typical questions might include:

  1. I ride up Swain's Lane in Highgate most days.  How long do I take, and am I getting faster or slower?
  2. I want to go for a ride in the Chilterns, so I would like to see tracks of previous rides to get some route ideas.

So I am going to upload my Strava data into an Oracle database, load the GPS tracks currently in GPX files into the database, convert them to spatial geometries, and then process them.  To answer the first question, I will need to provide a working definition of Swain's Lane.  For the second, I need definitions of various areas.  For example, I will take the Chilterns to be the area designated by Natural England as an Area of Outstanding Natural Beauty.  So I will need to import a definition of that and other areas from published data.

The following series of blogs illustrates how I dealt with these and other challenges.

Monday, November 23, 2020

Partition Change Tracking During Materialized View Refresh and Query Rewrite

This article discusses the interplay of Partitioning, Partition Change Tracking and Query Rewrite in relation to Materialized Views.

Introduction

In the Oracle database, Materialized Views can be used to create pre-generated reporting tables.  A view of the data based on a SQL query is materialized into a table.  That query may restrict the rows and columns and may aggregate the data.  An application can reference the materialized view directly, or the Oracle database can 'rewrite' SQL queries on the original tables that are similar to the query in a materialized view to use that materialized view instead.
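As a trivial, self-contained illustration of the idea (the SALES table and the materialized view are made up for this example and have nothing to do with the ledger demonstrations below):

create materialized view sales_by_product_mv
enable query rewrite as
select product_id, sum(amount) total_amount
from   sales
group by product_id;

select product_id, sum(amount) /*this query can be transparently rewritten to scan SALES_BY_PRODUCT_MV*/
from   sales
group by product_id;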
By default, QUERY_REWRITE_INTEGRITY is enforced, which means query rewrite works only with materialized views that are up to date (i.e. the underlying data hasn't changed since the materialized view was last refreshed).  This note deals with that scenario.  Optionally, rewrite integrity can be configured to allow rewrite to occur on stale materialized views (this is called 'stale tolerated').  It can be set at system or session level.
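For example, stale-tolerated rewrite can be enabled for just the current session (the parameter defaults to ENFORCED):

alter session set query_rewrite_integrity = stale_tolerated;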
Partition Change Tracking (PCT) is 'a way of tracking the staleness of a materialized view on the partition and subpartition level'.  If both the materialized view and at least one underlying table in the view are similarly partitioned, then Oracle can determine the relationship between partitions and subpartitions in the underlying table and those in the materialized view.  The database can track not just whether any partition in the underlying tables has been updated since the last refresh of the materialized view, but which ones. During SQL parse, if after partition pruning of the query on the underlying tables, none of the remaining partitions are stale then the query can still be rewritten.  Also, it is possible to refresh just the stale partitions in the materialized view, those that correspond to the underlying partitions that have been updated since the last refresh.
Query rewrite is a cost-based SQL transformation.  Therefore, it will only occur if the optimizer calculates that the rewritten query has a lower cost.  If I refresh the materialized view in non-atomic mode, the materialized view will be truncated and repopulated in direct-path mode, so the data can be compressed (either with basic compression, or with Hybrid Columnar Compression on an engineered platform) without the need for the Advanced Compression licence.  This will further reduce the size and cost of accessing the materialized view and increase the likelihood of query rewrite.
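For example, a complete refresh in non-atomic mode can be requested as follows (a minimal sketch with a hypothetical materialized view name; the refresh commands actually used in the demonstrations appear later):
REM complete ('C') non-atomic refresh of a hypothetical materialized view MY_MV
exec dbms_mview.refresh(list=>'MY_MV', method=>'C', atomic_refresh=>FALSE);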
I have written a series of blogs about retrofitting partitioning into existing applications.  One of my examples was based on PeopleSoft General Ledger reporting, in which I discussed options for partitioning the ledger such that there is a different partition for each accounting period.  Once an accounting period is closed, the application generally doesn't change it further.  It should be possible to create partitioned materialized views on the ledger table to support GL reporting using query rewrite.  As the application continues to insert data into the partition for the current accounting period, that partition will quickly become stale and queries on that partition won't be rewritten.  However, it is common for customers to run suites of reports overnight, and those could be run after a materialized view refresh and make good use of query rewrite. 
However, as I modelled this, I ran into a few problems that reveal some of the behaviour of PCT, query rewrite and materialized view refresh.  I have created a number of test scripts that illustrate various scenarios that I will describe below.  The full scripts are available on Github.

Documented Preconditions and Limitations

Oracle's documentation sets out a number of preconditions for PCT.
  • Partitioned tables must use either range, list or composite partitioning with range or list as the top-level partitioning strategy. - Therefore, hash partitioning is not supported.  What about interval partitioning?  See demonstration 3.
  • The top-level partition key must consist of only a single column. - If, as I proposed, the ledger table is range partitioned on the combination FISCAL_YEAR and ACCOUNTING_PERIOD then PCT will not work (see demonstration 1: Multi-column composite partitioning).  So, are other partitioning strategies viable?
  • The materialized view must contain either the partition key column or a partition marker or ROWID or join dependent expression of the detail table (see the partition marker sketch below).
  • If you use a GROUP BY clause, the partition key column or the partition marker or ROWID or join dependent expression must be present in the GROUP BY clause.
Note that, while partition change tracking tracks the staleness on a partition and subpartition level (for composite partitioned tables), the level of granularity for PCT refresh is only the top-level partitioning strategy. Consequently, any change to data in one of the subpartitions of a composite partitioned-table will only mark the single impacted subpartition as stale and have the rest of the table available for rewrite, but the PCT refresh will refresh the whole partition that contains the impacted subpartition.
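To illustrate the partition marker requirement, here is a minimal sketch (it borrows the PS_LEDGER table from the demonstrations that follow, but it is not one of the materialized views actually used in this article) in which DBMS_MVIEW.PMARKER stands in for the partition key column; the marker must appear in both the select list and the GROUP BY clause:
REM sketch only: a partition marker generated from the detail table rowid
CREATE MATERIALIZED VIEW mv_pmarker_sketch
ENABLE QUERY REWRITE AS
SELECT DBMS_MVIEW.PMARKER(l.rowid) pmarker
,      l.business_unit
,      SUM(l.posted_total_amt) posted_total_amt
FROM   ps_ledger l
GROUP BY DBMS_MVIEW.PMARKER(l.rowid), l.business_unit
/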

Demonstrations

In each of the following demonstrations, I will create a copy of the PeopleSoft Financials General Ledger table PS_LEDGER, populate it with random data to simulate 2½ years of actuals and 4 years of budget data.  The table will be partitioned differently in each demonstration.  I will also create one or two materialized views that will also be partitioned.  Then I will add data for another accounting period and look at how the materialized view refresh and query rewrite behave when one partition is stale.
The tests have been run on Oracle 19.9.  Query rewrite is enabled, and rewrite integrity is enforced.
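These parameter settings can be displayed in SQL*Plus with, for example:
show parameter query_rewrite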
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_enabled                string      TRUE
query_rewrite_integrity              string      enforced


Demonstration 1: Multi-column composite partitioning

I will start with my usual composite partitioning of the ledger table on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD to permit sub-partitioning on LEDGER.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
,ledger VARCHAR2(10) NOT NULL
,account VARCHAR2(10) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (FISCAL_YEAR,ACCOUNTING_PERIOD) 
SUBPARTITION BY LIST (LEDGER)
SUBPARTITION TEMPLATE
(SUBPARTITION actuals  VALUES ('ACTUALS')
,SUBPARTITION budget   VALUES ('BUDGET'))
(PARTITION ledger_2018 VALUES LESS THAN (2019,0) PCTFREE 0 COMPRESS
--
,PARTITION ledger_2019_bf VALUES LESS THAN (2019,1) PCTFREE 0 COMPRESS 
,PARTITION ledger_2019_01 VALUES LESS THAN (2019,2) PCTFREE 0 COMPRESS 
…
,PARTITION ledger_2019_12 VALUES LESS THAN (2019,13) PCTFREE 0 COMPRESS
,PARTITION ledger_2019_cf VALUES LESS THAN (2020,0) PCTFREE 0 COMPRESS
--
,PARTITION ledger_2020_bf VALUES LESS THAN (2020,1)
,PARTITION ledger_2020_01 VALUES LESS THAN (2020,2) 
…
,PARTITION ledger_2020_12 VALUES LESS THAN (2020,13)
,PARTITION ledger_2020_cf VALUES LESS THAN (2021,0)
--
,PARTITION ledger_2021_bf VALUES LESS THAN (2021,1)
,PARTITION ledger_2021_01 VALUES LESS THAN (2021,2)
…
,PARTITION ledger_2021_12 VALUES LESS THAN (2021,13)
,PARTITION ledger_2021_cf VALUES LESS THAN (2022,0)
)
ENABLE ROW MOVEMENT 
NOPARALLEL NOLOGGING
/
@treeselectors
@popledger
I will also create the tree selector tables used as dimension tables in the nVision General Ledger Reports
REM treeselectors.sql 
CREATE TABLE PSTREESELECT05
(SELECTOR_NUM INTEGER NOT NULL,
 TREE_NODE_NUM INTEGER NOT NULL,
 RANGE_FROM_05 VARCHAR2(05) NOT NULL,
 RANGE_TO_05   VARCHAR2(05) NOT NULL)
 PARTITION BY RANGE (SELECTOR_NUM) INTERVAL (1)
 (PARTITION pstreeselector VALUES LESS THAN (2))
 NOPARALLEL NOLOGGING;
CREATE UNIQUE INDEX PS_PSTREESELECT05 ON PSTREESELECT05 (SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_05);

CREATE TABLE PSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
 TREE_NODE_NUM INTEGER NOT NULL,
 RANGE_FROM_10 VARCHAR2(10) NOT NULL,
 RANGE_TO_10   VARCHAR2(10) NOT NULL)
 PARTITION BY RANGE (SELECTOR_NUM) INTERVAL (1)
 (PARTITION pstreeselector VALUES LESS THAN (2))
 NOPARALLEL NOLOGGING;
CREATE UNIQUE INDEX PS_PSTREESELECT10 ON PSTREESELECT10 (SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10);

exec dbms_stats.set_table_prefs('SCOTT','PSTREESELECT05','GRANULARITY','ALL');
exec dbms_stats.set_table_prefs('SCOTT','PSTREESELECT10','GRANULARITY','ALL');
exec dbms_stats.set_table_prefs('SCOTT','PSTREESELECT05','METHOD_OPT'-
    ,'FOR ALL COLUMNS SIZE 1, FOR COLUMNS SELECTOR_NUM, (SELECTOR_NUM, TREE_NODE_NUM) SIZE 254');
exec dbms_stats.set_table_prefs('SCOTT','PSTREESELECT10','METHOD_OPT'-
    ,'FOR ALL COLUMNS SIZE 1, FOR COLUMNS SELECTOR_NUM, (SELECTOR_NUM, TREE_NODE_NUM) SIZE 254');
And then I will populate and collect statistics on the ledger with randomised, but skewed, data to simulate 
  • actuals data from fiscal year 2018 to period 6 of 2020
  • budget data from fiscal year 2018 to 2021 that is 10% of the size of the actuals data. 
Some typical indexes will be built on the ledger table. 
The tree selector tables will be populated with data corresponding to the ledger data:
  • the business unit tree will have both business units,
  • the account tree will have 25% of the 999 accounts,
  • the chartfield tree will have 10% of the 999 chartfields. 
Statistics preferences will be defined so that statistics will be collected at all table, partition and subpartition levels on all these tables. There will only be histograms on a few low cardinality columns.
REM popledger.sql
set autotrace off echo on pages 99 lines 200 trimspool on
truncate table ps_ledger;
exec dbms_stats.set_table_prefs('SCOTT','PS_LEDGER','METHOD_OPT'-
    ,'FOR ALL COLUMNS SIZE 1, FOR COLUMNS FISCAL_YEAR, ACCOUNTING_PERIOD, LEDGER, BUSINESS_UNIT SIZE 254');
exec dbms_stats.set_table_prefs('SCOTT','PS_LEDGER','GRANULARITY','ALL');
ALTER TABLE PS_LEDGER PARALLEL 8 NOLOGGING;

CREATE /*UNIQUE*/ INDEX ps_ledger ON ps_ledger
(business_unit, ledger, account, deptid
,product, fund_code, class_fld, affiliate
,chartfield2, project_id, book_code, gl_adjust_type
,currency_cd, statistics_code, fiscal_year, accounting_period
) COMPRESS 2 PARALLEL
/
INSERT /*+APPEND PARALLEL ENABLE_PARALLEL_DML NO_GATHER_OPTIMIZER_STATISTICS*//*IGNORE_ROW_ON_DUPKEY_INDEX(PS_LEDGER)*/ 
INTO ps_ledger
with n as (
SELECT rownum n from dual connect by level <= 1e2
), fy as (
SELECT 2017+rownum fiscal_year FROM dual CONNECT BY level <= 4
), ap as (
SELECT FLOOR(dbms_random.value(0,13)) accounting_period FROM dual connect by level <= 998
UNION ALL SELECT 998 FROM DUAL CONNECT BY LEVEL <= 1
UNION ALL SELECT 999 FROM DUAL CONNECT BY LEVEL <= 1
), l as (
SELECT 'ACTUALS' ledger FROM DUAL CONNECT BY LEVEL <= 10
UNION ALL SELECT 'BUDGET' FROM DUAL
)
select 'BU'||LTRIM(TO_CHAR(CASE WHEN dbms_random.value <= .9 THEN 1 ELSE 2 END,'000')) business_unit 
,      l.ledger
,      'ACC'||LTRIM(TO_CHAR(999*SQRT(dbms_random.value),'000')) account 
,      'ALTACCT'||LTRIM(TO_CHAR(999*dbms_random.value,'000')) altacct
,      'DEPT'||LTRIM(TO_CHAR(9999*dbms_random.value,'0000')) deptid
,      'OPUNIT'||LTRIM(TO_CHAR(99*dbms_random.value,'00')) operating_unit
,      'P'||LTRIM(TO_CHAR(99999*dbms_random.value,'00000')) product 
,      'FUND'||LTRIM(TO_CHAR(9*dbms_random.value,'0')) fund_code
,      'CLAS'||LTRIM(TO_CHAR(9*dbms_random.value,'0')) class_fld
,      'PROD'||LTRIM(TO_CHAR(9*dbms_random.value,'0')) program_code
,      ' ' budget_ref
,      'AF'||LTRIM(TO_CHAR(999*dbms_random.value,'000')) affiliate 
,      'AFI'||LTRIM(TO_CHAR(99999*dbms_random.value,'00000')) affiliate_intra1
,      'AFI'||LTRIM(TO_CHAR( 9999*dbms_random.value,'0000')) affiliate_intra2
,      'CF'||LTRIM(TO_CHAR(  999*SQRT(dbms_random.value),'000')) chartfield1
,      'CF'||LTRIM(TO_CHAR(99999*dbms_random.value,'00000')) chartfield2
,      'CF'||LTRIM(TO_CHAR( 9999*dbms_random.value,'0000')) chartfield3
,      'PRJ'||LTRIM(TO_CHAR(9999*dbms_random.value,'0000')) project_id
,      'BK'||LTRIM(TO_CHAR(99*dbms_random.value,'00')) book_code
,      'GL'||LTRIM(TO_CHAR(99*dbms_random.value,'00')) gl_adjust_type
,      'GBP' currency_cd 
,      ' ' statistics_code 
,      fy.fiscal_year
,      ap.accounting_period
,      dbms_random.value(0,1e6) posted_total_amt 
,      0 posted_base_amt 
,      0 posted_tran_amt 
,      'GBP' base_currency 
,      SYSDATE dttm_stamp_sec
,      0 process_instance 
FROM   fy,ap, l, n
WHERE  l.ledger = 'BUDGET' or (fy.fiscal_year < 2020 or (fy.fiscal_year = 2020 AND ap.accounting_period <= 6))
/
commit;
exec dbms_stats.gather_table_stats('SCOTT','PS_LEDGER');

CREATE INDEX psxledger ON ps_ledger
(ledger, fiscal_year, accounting_period, business_unit, account, chartfield1
) LOCAL COMPRESS 4 PARALLEL
/
CREATE INDEX psyledger ON ps_ledger
(ledger, fiscal_year, business_unit, account, chartfield1, accounting_period
) LOCAL COMPRESS 3 PARALLEL
/
ALTER INDEX ps_ledger NOPARALLEL;
ALTER INDEX psxledger NOPARALLEL;
ALTER INDEX psyledger NOPARALLEL;

TRUNCATE TABLE PSTREESELECT05;
TRUNCATE TABLE PSTREESELECT10;
INSERT INTO PSTREESELECT05
WITH x as (SELECT DISTINCT business_unit FROM ps_ledger)
, y as (SELECT 30982, FLOOR(DBMS_RANDOM.value(1,1e10)) tree_node_num, business_unit FROM x)
select y.*, business_unit FROM y
/
INSERT INTO PSTREESELECT10
WITH x as (SELECT DISTINCT account FROM ps_ledger)
, y as (SELECT 30984, FLOOR(DBMS_RANDOM.value(1,1e10)) tree_node_num, account FROM x)
select y.*, account FROM y
where mod(tree_node_num,100)<25
/
INSERT INTO PSTREESELECT10
WITH x as (SELECT DISTINCT chartfield1 FROM ps_ledger)
, y as (SELECT 30985, FLOOR(DBMS_RANDOM.value(1,1e10)) tree_node_num, chartfield1 FROM x)
select y.*, chartfield1 FROM y
where mod(tree_node_num,100)<10
/
Per complete fiscal year, there are 1,000,000 actuals rows and 100,000 budget rows
LEDGER     FISCAL_YEAR   COUNT(*) MAX(ACCOUNTING_PERIOD)
---------- ----------- ---------- ----------------------
ACTUALS           2018    1000000                    999
                  2019    1000000                    999
                  2020     538408                      6

BUDGET            2018     100000                    999
                  2019     100000                    999
                  2020     100000                    999
                  2021     100000                    999

**********             ----------
sum                       2938408
There are about 77K rows per accounting period, with just 1000 rows in each of periods 998 (adjustments) and 999 (carry forward):
LEDGER     FISCAL_YEAR ACCOUNTING_PERIOD   COUNT(*)
---------- ----------- ----------------- ----------
…
ACTUALS           2019                 0      76841
                                       1      76410
                                       2      76867
                                       3      77088
                                       4      77740
                                       5      77010
                                       6      76650
                                       7      76553
                                       8      76923
                                       9      76586
                                      10      76276
                                      11      76943
                                      12      76113
                                     998       1000
                                     999       1000

********** ***********                   ----------
           sum                              1000000

ACTUALS           2020                 0      77308
                                       1      76696
                                       2      76944
                                       3      77227
                                       4      76944
                                       5      76524
                                       6      76765

********** ***********                   ----------
           sum                               538408
…
I will create two MVs, each containing data for a single fiscal year: one for 2019 and one for 2020.  I will only range partition the MVs on ACCOUNTING_PERIOD.  There is no need to partition them on FISCAL_YEAR since each contains only a single year.
CREATE MATERIALIZED VIEW mv_ledger_2019
PARTITION BY RANGE (ACCOUNTING_PERIOD)
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE)
) PCTFREE 0 COMPRESS 
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year = 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (ACCOUNTING_PERIOD)
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE)
) PCTFREE 0 COMPRESS 
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year = 2020
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
@mvpop
@@mvpop
@@mvsql
@@pop2020m7
@@mvsql
@@mvtrc
@@mvvol
@@mvsql
@@mvcap
The materialized views are populated on creation, but I will explicitly collect statistics on them.
SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;

ALTER MATERIALIZED VIEW mv_ledger_2019 NOPARALLEL;
exec dbms_stats.set_table_prefs('SCOTT','MV_LEDGER_2019','METHOD_OPT',-
    'FOR ALL COLUMNS SIZE 1, FOR COLUMNS FISCAL_YEAR, ACCOUNTING_PERIOD, BUSINESS_UNIT SIZE 254');
exec dbms_stats.set_table_prefs('SCOTT','MV_LEDGER_2019','GRANULARITY','ALL');

ALTER MATERIALIZED VIEW mv_ledger_2020 NOPARALLEL;
exec dbms_stats.set_table_prefs('SCOTT','MV_LEDGER_2020','METHOD_OPT',-
    'FOR ALL COLUMNS SIZE 1, FOR COLUMNS FISCAL_YEAR, ACCOUNTING_PERIOD, BUSINESS_UNIT SIZE 254');
exec dbms_stats.set_table_prefs('SCOTT','MV_LEDGER_2020','GRANULARITY','ALL');

exec dbms_stats.gather_table_stats('SCOTT','MV_LEDGER_2019');
exec dbms_stats.gather_table_stats('SCOTT','MV_LEDGER_2020');
Although I can do a full refresh of the MV, I cannot do a PCT refresh.
BEGIN dbms_mview.refresh(list=>'MV_LEDGER_2020',method=>'P',atomic_refresh=>FALSE); END;

*
ERROR at line 1:
ORA-12047: PCT FAST REFRESH cannot be used for materialized view "SCOTT"."MV_LEDGER_2020"
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 3020
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 2432
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 88
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 253
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 2413
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 2976
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 3263
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 3295
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 16
ORA-06512: at line 1
I can use EXPLAIN_MVIEW to check the status of the MV
REM mvcap.sql
create table MV_CAPABILITIES_TABLE
(
  statement_id      varchar(30) ,
  mvowner           varchar(30) ,
  mvname            varchar(30) ,
  capability_name   varchar(30) ,
  possible          character(1) ,
  related_text      varchar(2000) ,
  related_num       number ,
  msgno             integer ,
  msgtxt            varchar(2000) ,
  seq               number
) ;
 
truncate table MV_CAPABILITIES_TABLE;
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW ('SCOTT.MV_LEDGER_2019');
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW ('SCOTT.MV_LEDGER_2020');
break on mvname skip 1
column rel_text format a20
column msgtxt format a60
SELECT mvname, capability_name,  possible, SUBSTR(related_text,1,20) AS rel_text, SUBSTR(msgtxt,1,60) AS msgtxt
FROM MV_CAPABILITIES_TABLE
WHERE mvname like 'MV_LEDGER_20%'
ORDER BY mvname, seq;
EXPLAIN_MVIEW reports that general query rewrite is available but PCT and PCT query rewrite are not. Per the manual, Oracle simply cannot do a PCT refresh if the table has multi-column partitioning.
CAPABILITY_NAME                P REL_TEXT             MSGTXT
------------------------------ - -------------------- ------------------------------------------------------------
PCT                            N
REFRESH_COMPLETE               Y
REFRESH_FAST                   N
REWRITE                        Y
PCT_TABLE                      N PS_LEDGER            PCT not supported with multi-column partition key
REFRESH_FAST_AFTER_INSERT      N SCOTT.PS_LEDGER      the detail table does not have a materialized view log
REFRESH_FAST_AFTER_ONETAB_DML  N POSTED_TOTAL_AMT     SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML  N                      see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ONETAB_DML  N                      COUNT(*) is not present in the select list
REFRESH_FAST_AFTER_ONETAB_DML  N                      SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ANY_DML     N                      see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT               N                      PCT is not possible on any of the detail tables in the mater
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     Y
REWRITE_GENERAL                Y
REWRITE_PCT                    N                      general rewrite is not possible or PCT is not possible on an
PCT_TABLE_REWRITE              N PS_LEDGER            PCT not supported with multi-column partition key
At the moment, the materialized views are up to date.
SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM   PS_LEDGER A
,      PSTREESELECT05 L1
,      PSTREESELECT10 L
,      PSTREESELECT10 L2
WHERE  A.LEDGER='ACTUALS'
AND    A.FISCAL_YEAR=2020
AND    (A.ACCOUNTING_PERIOD BETWEEN 1 AND 6)
AND    L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND    L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND    L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND    A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM
/
And I get MV rewrite because the MV is up to date. Note that Oracle only probed partitions 2 to 7, so it correctly pruned partitions.
Plan hash value: 3290858815
--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |  5573 |   239K|   276   (3)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                       |                   |  5573 |   239K|   276   (3)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                          |                   |  5573 |   239K|   275   (3)| 00:00:01 |       |       |
|   3 |    JOIN FILTER CREATE                | :BF0000           |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|*  4 |     INDEX RANGE SCAN                 | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   5 |    VIEW                              | VW_GBC_17         |  5573 |   179K|   274   (3)| 00:00:01 |       |       |
|   6 |     HASH GROUP BY                    |                   |  5573 |   364K|   274   (3)| 00:00:01 |       |       |
|   7 |      JOIN FILTER USE                 | :BF0000           |  5573 |   364K|   273   (2)| 00:00:01 |       |       |
|*  8 |       HASH JOIN                      |                   |  5573 |   364K|   273   (2)| 00:00:01 |       |       |
|*  9 |        INDEX RANGE SCAN              | PS_PSTREESELECT10 |   239 |  4541 |     2   (0)| 00:00:01 |       |       |
|* 10 |        HASH JOIN                     |                   | 23295 |  1091K|   270   (2)| 00:00:01 |       |       |
|* 11 |         INDEX RANGE SCAN             | PS_PSTREESELECT10 |    77 |  1386 |     2   (0)| 00:00:01 |       |       |
|  12 |         PARTITION RANGE ITERATOR     |                   |   301K|  8827K|   267   (2)| 00:00:01 |     2 |     7 |
|* 13 |          MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2020    |   301K|  8827K|   267   (2)| 00:00:01 |     2 |     7 |
--------------------------------------------------------------------------------------------------------------------------
Now I will add more random data for the financial year 2020, accounting period 7. So there have been changes to just one partition.
REM pop2020m7.sql
insert into ps_ledger
with n as (
SELECT rownum n from dual connect by level <= 1e6/13
)
select 'BU'||LTRIM(TO_CHAR(CASE WHEN dbms_random.value <= .9 THEN 1 ELSE 2 END,'000')) business_unit 
,      'ACTUALS' ledger
,      'ACC'||LTRIM(TO_CHAR(999*SQRT(dbms_random.value),'000')) account 
,      'ALTACCT'||LTRIM(TO_CHAR(999*dbms_random.value,'000')) altacct
,      'DEPT'||LTRIM(TO_CHAR(9999*dbms_random.value,'0000')) deptid
,      'OPUNIT'||LTRIM(TO_CHAR(99*dbms_random.value,'00')) operating_unit
,      'P'||LTRIM(TO_CHAR(99999*dbms_random.value,'00000')) product 
,      'FUND'||LTRIM(TO_CHAR(9*dbms_random.value,'0')) fund_code
,      'CLAS'||LTRIM(TO_CHAR(9*dbms_random.value,'0')) class_fld
,      'PROD'||LTRIM(TO_CHAR(9*dbms_random.value,'0')) program_code
,      ' ' budget_ref
,      'AF'||LTRIM(TO_CHAR(999*dbms_random.value,'000')) affiliate 
,      'AFI'||LTRIM(TO_CHAR(99999*dbms_random.value,'00000')) affiliate_intra1
,      'AFI'||LTRIM(TO_CHAR( 9999*dbms_random.value,'0000')) affiliate_intra2
,      'CF'||LTRIM(TO_CHAR(  999*SQRT(dbms_random.value),'000')) chartfield1
,      'CF'||LTRIM(TO_CHAR(99999*dbms_random.value,'00000')) chartfield2
,      'CF'||LTRIM(TO_CHAR( 9999*dbms_random.value,'0000')) chartfield3
,      'PRJ'||LTRIM(TO_CHAR(9999*dbms_random.value,'0000')) project_id
,      'BK'||LTRIM(TO_CHAR(99*dbms_random.value,'00')) book_code
,      'GL'||LTRIM(TO_CHAR(99*dbms_random.value,'00')) gl_adjust_type
,      'GBP' currency_cd 
,      ' ' statistics_code 
,      2020 fiscal_year
,      7 accounting_period
,      dbms_random.value(0,1e6) posted_total_amt 
,      0 posted_base_amt 
,      0 posted_tran_amt 
,      'GBP' base_currency 
,      SYSDATE dttm_stamp_sec
,      0 process_instance 
FROM   n
/
set lines 200 pages 999 autotrace off
commit;
column owner format a10
column table_name format a15
column mview_name format a15
column detailobj_owner format a10 heading 'Detailobj|Owner'
column detailobj_name  format a15
column detailobj_alias format a20
column detail_partition_name format a20
column detail_subpartition_name format a20
column parent_table_partition format a20
select * from user_mview_detail_relations;
select * from user_mview_detail_partition;
select * from user_mview_detail_subpartition where freshness != 'FRESH';
SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;
/
As soon as I have committed the insert, both the MVs need to be refreshed, even though none of the data queried by MV_LEDGER_2019 was changed. USER_MVIEW_DETAIL_RELATIONS reports PCT not applicable. No individual partitions are listed as stale.
MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2019  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2019  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            N                       86                        0
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            N                       86                        0
I no longer get Query Rewrite for either fiscal year.
SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM   PS_LEDGER A
,      PSTREESELECT05 L1
,      PSTREESELECT10 L
,      PSTREESELECT10 L2
WHERE  A.LEDGER='ACTUALS'
AND    A.FISCAL_YEAR=2019
AND    A.ACCOUNTING_PERIOD BETWEEN 1 AND 6
AND    L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND    L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND    L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND    A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM
/ 

Plan hash value: 346876754
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |   492 | 45756 |  2036   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY              |                   |   492 | 45756 |  2036   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                 |                   |   492 | 45756 |  2035   (1)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN         | PS_PSTREESELECT10 |   239 |  4541 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                |                   |  2055 |   148K|  2033   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN    |                   |   154 |  4466 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN       | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT            |                   |    77 |  1386 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN      | PS_PSTREESELECT10 |    77 |  1386 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE ITERATOR|                   | 26686 |  1172K|  2030   (1)| 00:00:01 |     3 |     8 |
|  10 |      PARTITION LIST SINGLE  |                   | 26686 |  1172K|  2030   (1)| 00:00:01 |     1 |     1 |
|* 11 |       TABLE ACCESS FULL     | PS_LEDGER         | 26686 |  1172K|  2030   (1)| 00:00:01 |   KEY |   KEY |
-----------------------------------------------------------------------------------------------------------------
Without PCT, I cannot do a partial refresh of a partitioned materialized view, and I will not get query rewrite if even a single partition in the underlying table has changed, whether or not that partition is relevant to the query. 
So is there a different partitioning strategy that will permit PCT to work effectively?


Demonstration 2: Simple 1-Dimensional Range Partitioning 

Let's start with a simple range partitioned example; one partition per fiscal year.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (FISCAL_YEAR) 
(PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS
,PARTITION ledger_2020 VALUES LESS THAN (2021) PCTFREE 10 NOCOMPRESS
,PARTITION ledger_2021 VALUES LESS THAN (2022) PCTFREE 10 NOCOMPRESS)
ENABLE ROW MOVEMENT 
NOPARALLEL NOLOGGING
/
@treeselectors
@popledger
Now I am going to build a materialized view to summarise the ledger data by BUSINESS_UNIT, ACCOUNT and CHARTFIELD1, and of course by FISCAL_YEAR and ACCOUNTING_PERIOD.
CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (FISCAL_YEAR)
(PARTITION ledger_2019 VALUES LESS THAN (2020) 
,PARTITION ledger_2020 VALUES LESS THAN (2021) 
) PCTFREE 0 COMPRESS NOPARALLEL
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS 
SELECT business_unit, ledger, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year >= 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, ledger, account, chartfield1, fiscal_year, accounting_period
/
@mvpop
I can see the MV has partitions for 2019 and 2020 populated, and they contain fewer rows than the original.
                                          Sub-                                             Rows
                Part                      Part                                              per
TABLE_NAME       Pos PARTITION_NAME        Pos SUBPARTITION_NAME         NUM_ROWS BLOCKS  Block COMPRESS COMPRESS_FOR
--------------- ---- -------------------- ---- ------------------------- -------- ------ ------ -------- -------------------
MV_LEDGER_2020     1 LEDGER_2019                                                                ENABLED  BASIC
                   2 LEDGER_2020                                                                ENABLED  BASIC
                                                                          1456077   4864  299.4

PS_LEDGER          1 LEDGER_2018                                          1100000  17893   61.5 ENABLED  BASIC
                   2 LEDGER_2019                                          1100000  17892   61.5 ENABLED  BASIC
                   3 LEDGER_2020                                           637915  16456   38.8 DISABLED
                   4 LEDGER_2021                                           100000   2559   39.1 DISABLED
                                                                          2937915  54800   53.6
When I query 2018 ledger data, for which there is no materialized view, the execution plan shows that Oracle full scanned only the first partition of the PS_LEDGER table that contains the 2018 data. It eliminated the other partitions.
Plan hash value: 1780139226
---------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                   |   822 | 76446 |  4883   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY            |                   |   822 | 76446 |  4883   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN               |                   |   822 | 76446 |  4882   (1)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN       | PS_PSTREESELECT10 |   228 |  4332 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN              |                   |  3601 |   260K|  4880   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN  |                   |   180 |  5220 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN     | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT          |                   |    90 |  1620 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN    | PS_PSTREESELECT10 |    90 |  1620 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE|                   | 39970 |  1756K|  4877   (1)| 00:00:01 |     1 |     1 |
|* 10 |      TABLE ACCESS FULL    | PS_LEDGER         | 39970 |  1756K|  4877   (1)| 00:00:01 |     1 |     1 |
---------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("A"."ACCOUNTING_PERIOD"<=6 AND "A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2018 AND
              "A"."ACCOUNTING_PERIOD">=1 AND "A"."CURRENCY_CD"='GBP')
When I query the 2020 data, Oracle has rewritten the query to use the second partition of the materialised view. Again it only queried a single partition.
Plan hash value: 4006930814
----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                   |  1088 | 88128 |   674   (2)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                   |                   |  1088 | 88128 |   674   (2)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                      |                   |  1088 | 88128 |   673   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN              | PS_PSTREESELECT10 |   228 |  4332 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                     |                   |  4767 |   288K|   671   (2)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN         |                   |   180 |  5220 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN            | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                 |                   |    90 |  1620 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN           | PS_PSTREESELECT10 |    90 |  1620 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE       |                   | 52909 |  1705K|   668   (2)| 00:00:01 |     2 |     2 |
|* 10 |      MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2020    | 52909 |  1705K|   668   (2)| 00:00:01 |     2 |     2 |
----------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MV_LEDGER_2020"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2020"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2020"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("MV_LEDGER_2020"."ACCOUNTING_PERIOD"<=6 AND "MV_LEDGER_2020"."FISCAL_YEAR"=2020 AND
              "MV_LEDGER_2020"."ACCOUNTING_PERIOD">=1)
Now I am going to simulate running financial processing for period 7 in fiscal year 2020, by inserting data into PS_LEDGER for that period.
@pop2020m7.sql
The materialised view status and staleness on USER_MVIEWS changes to NEEDS_COMPILE when the insert into PS_LEDGER is committed. 
  • USER_MVIEW_DETAIL_RELATIONS shows that 1 tracked partition is stale but three are still fresh. 
  • USER_MVIEW_DETAIL_PARTITION shows the tracking status of each source partition. We can see that the LEDGER_2020 partition on PS_LEDGER is stale but the others are still fresh.
22:00:01 SQL> SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;

MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

22:00:01 SQL> select * from user_mview_detail_relations;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                        3                        1

22:00:01 SQL> select * from user_mview_detail_partition;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAIL_PARTITION_NAM DETAIL_PARTITION_POSITION FRESHNE LAST_REFRESH_TIME
---------- --------------- ---------- --------------- -------------------- ------------------------- ------- -------------------
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2018                                  1 FRESH   21:59:41 15/11/2020
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2019                                  2 FRESH   21:59:41 15/11/2020
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2020                                  3 STALE   21:59:41 15/11/2020
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2021                                  4 FRESH   21:59:41 15/11/2020
The query on 2019 still rewrites because the 2019 partition is fresh
Plan hash value: 4006930814
----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                   |  1088 | 88128 |   674   (2)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                   |                   |  1088 | 88128 |   674   (2)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                      |                   |  1088 | 88128 |   673   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN              | PS_PSTREESELECT10 |   228 |  4332 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                     |                   |  4767 |   288K|   671   (2)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN         |                   |   180 |  5220 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN            | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                 |                   |    90 |  1620 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN           | PS_PSTREESELECT10 |    90 |  1620 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE       |                   | 52909 |  1705K|   668   (2)| 00:00:01 |     1 |     1 |
|* 10 |      MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2020    | 52909 |  1705K|   668   (2)| 00:00:01 |     1 |     1 |
----------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MV_LEDGER_2020"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2020"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2020"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("MV_LEDGER_2020"."ACCOUNTING_PERIOD"<=6 AND "MV_LEDGER_2020"."FISCAL_YEAR"=2019 AND
              "MV_LEDGER_2020"."ACCOUNTING_PERIOD">=1)
But we no longer get rewrite on the 2020 partition because it is stale. The query stays on PS_LEDGER.
Plan hash value: 1780139226

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                   |   477 | 44361 |  4483   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY            |                   |   477 | 44361 |  4483   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN               |                   |   477 | 44361 |  4482   (1)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN       | PS_PSTREESELECT10 |   228 |  4332 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN              |                   |  2090 |   151K|  4479   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN  |                   |   180 |  5220 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN     | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT          |                   |    90 |  1620 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN    | PS_PSTREESELECT10 |    90 |  1620 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE|                   | 23179 |  1018K|  4476   (1)| 00:00:01 |     3 |     3 |
|* 10 |      TABLE ACCESS FULL    | PS_LEDGER         | 23179 |  1018K|  4476   (1)| 00:00:01 |     3 |     3 |
---------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("A"."ACCOUNTING_PERIOD"<=6 AND "A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2020 AND
              "A"."ACCOUNTING_PERIOD">=1 AND "A"."CURRENCY_CD"='GBP')
So now I have to refresh the view. I am going to use 
  • method P to indicate that it should use PCT,
  • atomic refresh is set to false because I want Oracle to truncate the partition and repopulate it in direct path mode so that the data is compressed (because I am not licenced for advanced compression).
I am also going to trace the refresh process so I can see what actually happened. I will give the trace file an identifying suffix to make it easier to find. I can query the trace file name from v$diag_info
I need to collect statistics myself, or they won't be updated.
REM mvtrc.sql
disconnect
connect scott/tiger@oracle_pdb

column name format a20
column value format a70
alter session set tracefile_identifier=PCT;
select * from v$diag_info where name like '%Trace%';

alter session set sql_trace = true;
exec dbms_mview.refresh(list=>'MV_LEDGER_2019',method=>'P',atomic_refresh=>FALSE);
exec dbms_mview.refresh(list=>'MV_LEDGER_2020',method=>'P',atomic_refresh=>FALSE);
alter session set sql_trace = false;
exec dbms_stats.gather_Table_stats(user,'MV_LEDGER_2019');
exec dbms_stats.gather_Table_stats(user,'MV_LEDGER_2020');
v$diag_info indicates the trace file
   INST_ID NAME                 VALUE                                                                      CON_ID
---------- -------------------- ---------------------------------------------------------------------- ----------
         1 Diag Trace           /u01/app/oracle/diag/rdbms/oracle/oracle/trace                                  0
         1 Default Trace File   /u01/app/oracle/diag/rdbms/oracle/oracle/trace/oracle_ora_7802_PCT.trc          0
I can see the total number of rows in MV_LEDGER_2020 has gone up from 1455085 to 1528980, reflecting the rows I inserted.
                                          Sub-                                             Rows
                Part                      Part                                              per
TABLE_NAME       Pos PARTITION_NAME        Pos SUBPARTITION_NAME         NUM_ROWS BLOCKS  Block COMPRESS COMPRESS_FOR
--------------- ---- -------------------- ---- ------------------------- -------- ------ ------ -------- ------------------------------
MV_LEDGER_2020     1 LEDGER_2019                                           946825   3173  298.4 ENABLED  BASIC
                   2 LEDGER_2020                                           582155   1926  302.3 ENABLED  BASIC
                                                                          1528980   5099  299.9

PS_LEDGER          1 LEDGER_2018                                          1100000  17893   61.5 ENABLED  BASIC
                   2 LEDGER_2019                                          1100000  17892   61.5 ENABLED  BASIC
                   3 LEDGER_2020                                           637915  16456   38.8 DISABLED
                   4 LEDGER_2021                                           100000   2559   39.1 DISABLED
                                                                          2937915  54800   53.6
I am just going to pick out the statements from the trace that alter the materialized view.  I can see the LEDGER_2020 partition was truncated and then the data for the stale ledger partition was reinserted in direct path mode, so it will have been compressed.  The statistics confirm this, as I can calculate that the number of rows per block is still around 300.
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2020" TRUNCATE  PARTITION LEDGER_2020
…
/* MV_REFRESH (INS) */ INSERT /*+ APPEND BYPASS_RECURSIVE_CHECK */  INTO "SCOTT"."MV_LEDGER_2020" PARTITION ( LEDGER_2020 ) ("BUSINESS_UNIT"
,"LEDGER", "ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ 
"PS_LEDGER"."BUSINESS_UNIT", "PS_LEDGER"."LEDGER" , "PS_LEDGER"."ACCOUNT" , "PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" P0,
"PS_LEDGER"."ACCOUNTING_PERIOD" ,SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR">=2019 
AND "PS_LEDGER"."LEDGER"='ACTUALS' AND "PS_LEDGER"."CURRENCY_CD"='GBP') AND  ( ( (  (  (  ( "PS_LEDGER"."FISCAL_YEAR" >= 2020 )  )  )  
AND  (  (  ( "PS_LEDGER"."FISCAL_YEAR" < 2021 )  )  ) )  ) ) GROUP BY "PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."LEDGER"
,"PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…
I can use EXPLAIN_MVIEW to check the status of MV_LEDGER_2020. PCT is enabled for refresh and rewrite.
CAPABILITY_NAME                P REL_TEXT             MSGTXT
------------------------------ - -------------------- ------------------------------------------------------------
PCT                            Y
REFRESH_COMPLETE               Y
REFRESH_FAST                   Y
REWRITE                        Y
PCT_TABLE                      Y PS_LEDGER
REFRESH_FAST_AFTER_INSERT      N SCOTT.PS_LEDGER      the detail table does not have a materialized view log
REFRESH_FAST_AFTER_ONETAB_DML  N POSTED_TOTAL_AMT     SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML  N                      see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ONETAB_DML  N                      COUNT(*) is not present in the select list
REFRESH_FAST_AFTER_ONETAB_DML  N                      SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ANY_DML     N                      see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT               Y
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     Y
REWRITE_GENERAL                Y
REWRITE_PCT                    Y
PCT_TABLE_REWRITE              Y PS_LEDGER
I can see PCT has worked.
  • I still get query rewrite for queries that touch only the partitions that are still fresh.
  • The refresh process refreshes only the stale partitions. 
However, I have to regenerate the materialized view for the whole fiscal year, when I have only changed one accounting period. Could I organise it to refresh just a single accounting period?


Demonstration 3: Interval Partitioning 

This time I am going to use interval partitioning. I have explicitly specified the partitions for previous years because I don't want to allow any free space in the blocks, but the current and future partitions will be created automatically.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (FISCAL_YEAR) INTERVAL (1)
(PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS)
ENABLE ROW MOVEMENT NOLOGGING
/
@treeselectors
@popledger
I will similarly create a single materialized view with interval partitioning per fiscal year and populate it for 2019 onwards.
CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (FISCAL_YEAR) INTERVAL (1)
(PARTITION ledger_2019 VALUES LESS THAN (2020)
) PCTFREE 0 COMPRESS
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year >= 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
@@mvpop
@@mvvol
@@mvsql
I get exactly the same behaviour as the previous demonstration. The only difference is that the new partitions have system generated names, but as before just one of them is identified as stale.
@pop2020m7.sql
23:25:42 SQL> SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;

MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

23:25:42 SQL> select * from user_mview_detail_relations;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                        3                        1

23:25:42 SQL> select * from user_mview_detail_partition;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAIL_PARTITION_NAM DETAIL_PARTITION_POSITION FRESHNE LAST_REFRESH_TIME
---------- --------------- ---------- --------------- -------------------- ------------------------- ------- -------------------
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2018                                  1 FRESH   23:25:21 15/11/2020
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2019                                  2 FRESH   23:25:21 15/11/2020
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       SYS_P981                                     3 STALE   23:25:21 15/11/2020
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       SYS_P982                                     4 FRESH   23:25:21 15/11/2020
However, when I look in the trace of the refresh, I see that it has truncated and repopulated the partitions for both 2020 and 2021 even though I didn't change any of the data in the 2021 partition, and it is listed as fresh.
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2020" TRUNCATE  PARTITION SYS_P987
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2020" TRUNCATE  PARTITION SYS_P986
…
/* MV_REFRESH (INS) */ INSERT /*+ APPEND BYPASS_RECURSIVE_CHECK */  FIRST  WHEN  (  (  (  ( "P0" >= 2020 )  )  )  AND  (  (  ( "P0" < 2021 )
)  )  )  THEN  INTO "SCOTT"."MV_LEDGER_2020" PARTITION (SYS_P986)("BUSINESS_UNIT", "ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", 
"ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT")  WHEN  (  (  (  ( "P0" >= 2021 )  )  )  AND  (  (  ( "P0" < 2022 )  )  )  )  THEN  INTO 
"SCOTT"."MV_LEDGER_2020" PARTITION (SYS_P987)("BUSINESS_UNIT", "ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", "ACCOUNTING_PERIOD", 
"POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" , "PS_LEDGER"."ACCOUNT" , "PS_LEDGER"."CHARTFIELD1" , 
"PS_LEDGER"."FISCAL_YEAR" P0, "PS_LEDGER"."ACCOUNTING_PERIOD" ,SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  FROM "PS_LEDGER" "PS_LEDGER" WHERE
("PS_LEDGER"."FISCAL_YEAR">=2019 AND "PS_LEDGER"."LEDGER"='ACTUALS' AND "PS_LEDGER"."CURRENCY_CD"='GBP') AND  ( ( (  (  (  ( 
"PS_LEDGER"."FISCAL_YEAR" >= 2020 )  )  )  AND  (  (  ( "PS_LEDGER"."FISCAL_YEAR" < 2022 )  )  )  )  ) ) GROUP BY
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…
In practice, in this particular case, it won't make a huge difference because there is no actuals data in 2021. The partition for 2021 has been created in the data dictionary, but due to deferred segment creation, it has not been physically created because there is no data in it. However, if I had updated data in 2019, then it would have truncated and repopulated two partitions (2019 and 2020). 
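The deferred segment creation point can be checked in the data dictionary.  This is a minimal sketch, assuming the interval-partitioned MV_LEDGER_2020 from this demonstration; SEGMENT_CREATED shows whether a physical segment exists for each partition:
REM sketch only: list partitions of the MV container table and whether their segments exist
SELECT partition_name, segment_created
FROM   user_tab_partitions
WHERE  table_name = 'MV_LEDGER_2020'
ORDER BY partition_position
/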
Interval partitioning is a form of range partitioning, so it is expected that PCT still works. However, I have no explanation as to why the partition following the stale partition was also refreshed. This might be a bug.

Demonstration 4: Composite (Range-List) Partitioning 

This time I am going to create a composite partitioned table.  It will have the same range partitioning on FISCAL_YEAR, but I will then list sub-partition it by ACCOUNTING_PERIOD with 14 periods per fiscal year.  I will use a template so that each partition will have the same subpartitions.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (FISCAL_YEAR) 
SUBPARTITION BY LIST (ACCOUNTING_PERIOD)
SUBPARTITION TEMPLATE
(SUBPARTITION ap_bf VALUES (0) 
,SUBPARTITION ap_01 VALUES (1) 
,SUBPARTITION ap_02 VALUES (2) 
,SUBPARTITION ap_03 VALUES (3) 
,SUBPARTITION ap_04 VALUES (4) 
,SUBPARTITION ap_05 VALUES (5) 
,SUBPARTITION ap_06 VALUES (6) 
,SUBPARTITION ap_07 VALUES (7) 
,SUBPARTITION ap_08 VALUES (8) 
,SUBPARTITION ap_09 VALUES (9) 
,SUBPARTITION ap_10 VALUES (10) 
,SUBPARTITION ap_11 VALUES (11) 
,SUBPARTITION ap_12 VALUES (12) 
,SUBPARTITION ap_cf VALUES (DEFAULT))
 (PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS
,PARTITION ledger_2020 VALUES LESS THAN (2021) 
,PARTITION ledger_2021 VALUES LESS THAN (2022) 
) ENABLE ROW MOVEMENT NOPARALLEL NOLOGGING
/
@treeselectors
@popledger
I will partition the materialized view similarly.
CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (FISCAL_YEAR) 
SUBPARTITION BY LIST (ACCOUNTING_PERIOD)
SUBPARTITION TEMPLATE
(SUBPARTITION ap_bf VALUES (0) 
,SUBPARTITION ap_01 VALUES (1) 
,SUBPARTITION ap_02 VALUES (2) 
,SUBPARTITION ap_03 VALUES (3) 
,SUBPARTITION ap_04 VALUES (4) 
,SUBPARTITION ap_05 VALUES (5) 
,SUBPARTITION ap_06 VALUES (6) 
,SUBPARTITION ap_07 VALUES (7) 
,SUBPARTITION ap_08 VALUES (8) 
,SUBPARTITION ap_09 VALUES (9) 
,SUBPARTITION ap_10 VALUES (10) 
,SUBPARTITION ap_11 VALUES (11) 
,SUBPARTITION ap_12 VALUES (12) 
,SUBPARTITION ap_cf VALUES (DEFAULT))
(PARTITION ledger_2019 VALUES LESS THAN (2020)
,PARTITION ledger_2020 VALUES LESS THAN (2021)
) PCTFREE 0 COMPRESS PARALLEL
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year >= 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
@mvpop
PCT does work properly. USER_MVIEW_DETAIL_RELATIONS reports that one PCT partition is stale, and USER_MVIEW_DETAIL_SUBPARTITION correctly identifies the stale sub-partition. However, as expected, the materialized view refresh truncates and repopulates the whole partition, not just the sub-partition, so we are still processing a whole fiscal year.
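The @pop2020m7.sql driver script is not reproduced in the post. Its essential effect is to insert and commit some ledger rows for fiscal year 2020, accounting period 7, which marks the corresponding detail table (sub)partition as stale. The following is a purely hypothetical sketch, using only the columns that appear in this post (the real PS_LEDGER has many more):
INSERT INTO ps_ledger
       (business_unit, ledger, currency_cd, account, chartfield1
       ,fiscal_year, accounting_period, posted_total_amt)
SELECT  business_unit, ledger, currency_cd, account, chartfield1
,       fiscal_year, 7 accounting_period, posted_total_amt
FROM    ps_ledger
WHERE   fiscal_year = 2020
AND     accounting_period = 6
AND     ledger = 'ACTUALS';

COMMIT;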
@pop2020m7.sql
17:40:03 SQL> SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;

MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

17:40:03 SQL> select * from user_mview_detail_relations;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                       55                        1

17:40:10 SQL> select * from user_mview_detail_partition;

no rows selected

17:40:10 SQL> select * from user_mview_detail_subpartition where freshness != 'FRESH';

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAIL_PARTITION_NAM DETAIL_SUBPARTITION_ DETAIL_SUBPARTITION_POSITION FRESH
---------- --------------- ---------- --------------- -------------------- -------------------- ---------------------------- -----
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2020          LEDGER_2020_AP_07                               8 STALE
If I query periods 1 to 6 in 2020 using a BETWEEN, it is expanded into two inequalities that can be seen in the predicate section. These subpartitions are up to date, so Oracle performs query rewrite.
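The test query is of roughly this shape. This is only a sketch: the execution plan shows that the real query also joins the PeopleSoft tree selector tables (PS_PSTREESELECT05 and PS_PSTREESELECT10) on BUSINESS_UNIT, ACCOUNT and CHARTFIELD1.
SELECT   a.business_unit, a.account, a.chartfield1, a.accounting_period
,        SUM(a.posted_total_amt) posted_total_amt
FROM     ps_ledger a
WHERE    a.ledger = 'ACTUALS'
AND      a.currency_cd = 'GBP'
AND      a.fiscal_year = 2020
AND      a.accounting_period BETWEEN 1 AND 6
GROUP BY a.business_unit, a.account, a.chartfield1, a.accounting_period
/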
Plan hash value: 1400212726
-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                   | 12260 |   969K|       |   664   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                    |                   | 12260 |   969K|  1128K|   664   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                       |                   | 12260 |   969K|       |   428   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN               | PS_PSTREESELECT10 |   270 |  5130 |       |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                      |                   | 45363 |  2746K|       |   425   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN          |                   |   182 |  5278 |       |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN             | PS_PSTREESELECT05 |     2 |    22 |       |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                  |                   |    91 |  1638 |       |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN            | PS_PSTREESELECT10 |    91 |  1638 |       |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE        |                   |   497K|    15M|       |   421   (1)| 00:00:01 |     2 |     2 |
|  10 |      PARTITION LIST ITERATOR      |                   |   497K|    15M|       |   421   (1)| 00:00:01 |   KEY |   KEY |
|* 11 |       MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2020    |   497K|    15M|       |   421   (1)| 00:00:01 |    15 |    28 |
-------------------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MV_LEDGER_2020"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2020"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2020"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  11 - filter("MV_LEDGER_2020"."ACCOUNTING_PERIOD">=1 AND "MV_LEDGER_2020"."ACCOUNTING_PERIOD"<=6 AND
              "MV_LEDGER_2020"."FISCAL_YEAR"=2020)
But if I insert data for period 7 of fiscal year 2020, then that subpartition is stale and Oracle leaves the query against that period to run, as submitted, against PS_LEDGER.
Plan hash value: 3964652976
---------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                             | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                                      |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|     1 |  HASH GROUP BY                                        |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|- *  2 |   HASH JOIN                                           |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|     3 |    NESTED LOOPS                                       |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|-    4 |     STATISTICS COLLECTOR                              |                   |       |       |            |          |       |       |
|- *  5 |      HASH JOIN                                        |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|     6 |       NESTED LOOPS                                    |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|-    7 |        STATISTICS COLLECTOR                           |                   |       |       |            |          |       |       |
|- *  8 |         HASH JOIN                                     |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|     9 |          NESTED LOOPS                                 |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|-   10 |           STATISTICS COLLECTOR                        |                   |       |       |            |          |       |       |
|    11 |            PARTITION RANGE SINGLE                     |                   |     1 |    44 |     3   (0)| 00:00:01 |     3 |     3 |
|    12 |             PARTITION LIST SINGLE                     |                   |     1 |    44 |     3   (0)| 00:00:01 |   KEY |   KEY |
|  * 13 |              TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER         |     1 |    44 |     3   (0)| 00:00:01 |    36 |    36 |
|  * 14 |               INDEX RANGE SCAN                        | PSXLEDGER         |     1 |       |     2   (0)| 00:00:01 |    36 |    36 |
|  * 15 |           INDEX RANGE SCAN                            | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|- * 16 |          INDEX RANGE SCAN                             | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|  * 17 |        INDEX RANGE SCAN                               | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|- * 18 |       INDEX RANGE SCAN                                | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|  * 19 |     INDEX RANGE SCAN                                  | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
|- * 20 |    INDEX RANGE SCAN                                   | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------------------------------------
…

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   5 - access("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   8 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  13 - filter("A"."CURRENCY_CD"='GBP')
  14 - access("A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2020 AND "A"."ACCOUNTING_PERIOD"=7)
  15 - access("L1"."SELECTOR_NUM"=30982 AND "A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
       filter("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  16 - access("L1"."SELECTOR_NUM"=30982)
  17 - access("L"."SELECTOR_NUM"=30985 AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
       filter("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
  18 - access("L"."SELECTOR_NUM"=30985)
  19 - access("L2"."SELECTOR_NUM"=30984 AND "A"."ACCOUNT"="L2"."RANGE_FROM_10")
       filter("A"."ACCOUNT"="L2"."RANGE_FROM_10")
  20 - access("L2"."SELECTOR_NUM"=30984)
So PCT also controls query rewrite correctly with list subpartitioning. Again, when I look at the trace of the refresh of the stale partition, the entire 2020 partition was truncated and repopulated in direct-path mode. There is no accounting period criterion on the insert statement.
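The exact refresh command is not shown in the post, but the truncate and APPEND insert in the trace are consistent with a non-atomic complete refresh. A sketch of how such a traced refresh might be driven:
ALTER SESSION SET tracefile_identifier = 'mv_refresh';
ALTER SESSION SET events '10046 trace name context forever, level 4';

BEGIN
  -- complete refresh; atomic_refresh=>FALSE permits truncate + direct-path insert
  dbms_mview.refresh(list => 'MV_LEDGER_2020', method => 'C', atomic_refresh => FALSE);
END;
/

ALTER SESSION SET events '10046 trace name context off';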
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2020" TRUNCATE  PARTITION LEDGER_2020
…
/* MV_REFRESH (INS) */ INSERT /*+ APPEND BYPASS_RECURSIVE_CHECK */  INTO "SCOTT"."MV_LEDGER_2020" PARTITION ( LEDGER_2020 ) ("BUSINESS_UNIT",
"ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" ,
"PS_LEDGER"."ACCOUNT" , "PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" P0, "PS_LEDGER"."ACCOUNTING_PERIOD" , 
SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR"=2020 AND "PS_LEDGER"."LEDGER"='ACTUALS' 
AND "PS_LEDGER"."CURRENCY_CD"='GBP') AND  ( ( (  (  (  ( "PS_LEDGER"."FISCAL_YEAR" < 2021 )  )  )  )  ) )GROUP BY
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…

Demonstration 5: Composite (Range-Range) Partitioning

I am still composite partitioning the ledger table and materialized view in this test. It will have the same range partitioning on FISCAL_YEAR, but this time I will range subpartition it by ACCOUNTING_PERIOD with 14 periods per fiscal year. I will use a template so that each partition will have the same subpartitions.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (FISCAL_YEAR) 
SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
SUBPARTITION TEMPLATE
(SUBPARTITION ap_bf VALUES LESS THAN (1) 
,SUBPARTITION ap_01 VALUES LESS THAN (2) 
,SUBPARTITION ap_02 VALUES LESS THAN (3) 
,SUBPARTITION ap_03 VALUES LESS THAN (4) 
,SUBPARTITION ap_04 VALUES LESS THAN (5) 
,SUBPARTITION ap_05 VALUES LESS THAN (6) 
,SUBPARTITION ap_06 VALUES LESS THAN (7) 
,SUBPARTITION ap_07 VALUES LESS THAN (8) 
,SUBPARTITION ap_08 VALUES LESS THAN (9) 
,SUBPARTITION ap_09 VALUES LESS THAN (10) 
,SUBPARTITION ap_10 VALUES LESS THAN (11) 
,SUBPARTITION ap_11 VALUES LESS THAN (12) 
,SUBPARTITION ap_12 VALUES LESS THAN (13) 
,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
 (PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS
,PARTITION ledger_2020 VALUES LESS THAN (2021) 
,PARTITION ledger_2021 VALUES LESS THAN (2022) 
)
ENABLE ROW MOVEMENT NOLOGGING
/
@treeselectors
@popledger
This time I will create one materialized view with two range partitions, one for each of the two fiscal years.
CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (FISCAL_YEAR)
SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
SUBPARTITION TEMPLATE
(SUBPARTITION ap_bf VALUES LESS THAN (1) 
,SUBPARTITION ap_01 VALUES LESS THAN (2) 
,SUBPARTITION ap_02 VALUES LESS THAN (3) 
,SUBPARTITION ap_03 VALUES LESS THAN (4) 
,SUBPARTITION ap_04 VALUES LESS THAN (5) 
,SUBPARTITION ap_05 VALUES LESS THAN (6) 
,SUBPARTITION ap_06 VALUES LESS THAN (7) 
,SUBPARTITION ap_07 VALUES LESS THAN (8) 
,SUBPARTITION ap_08 VALUES LESS THAN (9) 
,SUBPARTITION ap_09 VALUES LESS THAN (10) 
,SUBPARTITION ap_10 VALUES LESS THAN (11) 
,SUBPARTITION ap_11 VALUES LESS THAN (12) 
,SUBPARTITION ap_12 VALUES LESS THAN (13) 
,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
(PARTITION ledger_2019 VALUES LESS THAN (2020)
,PARTITION ledger_2020 VALUES LESS THAN (2021)
) PCTFREE 0 COMPRESS PARALLEL
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year >= 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
@mvpop
After inserting and committing data for fiscal year 2020, period 7, USER_MVIEW_DETAIL_SUBPARTITION correctly identifies the one stale sub-partition, and USER_MVIEW_DETAIL_RELATIONS reports one stale PCT partition.
@pop2020m7.sql
19:09:50 SQL> SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;

MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

19:09:50 SQL> select * from user_mview_detail_relations;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                       55                        1

19:09:56 SQL> select * from user_mview_detail_subpartition where freshness != 'FRESH';

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAIL_PARTITION_NAM DETAIL_SUBPARTITION_ DETAIL_SUBPARTITION_POSITION FRESH
---------- --------------- ---------- --------------- -------------------- -------------------- ---------------------------- -----
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2020          LEDGER_2020_AP_07                               8 STALE
Query rewrite continues to work on the fresh partitions.
Plan hash value: 589110139
-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                   | 13427 |  1062K|       |   683   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                    |                   | 13427 |  1062K|  1232K|   683   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                       |                   | 13427 |  1062K|       |   427   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN               | PS_PSTREESELECT10 |   257 |  4883 |       |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                      |                   | 52141 |  3156K|       |   424   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN          |                   |   210 |  6090 |       |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN             | PS_PSTREESELECT05 |     2 |    22 |       |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                  |                   |   105 |  1890 |       |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN            | PS_PSTREESELECT10 |   105 |  1890 |       |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE        |                   |   496K|    15M|       |   420   (1)| 00:00:01 |     2 |     2 |
|  10 |      PARTITION RANGE ITERATOR     |                   |   496K|    15M|       |   420   (1)| 00:00:01 |     2 |     7 |
|* 11 |       MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2020    |   496K|    15M|       |   420   (1)| 00:00:01 |    15 |    28 |
-------------------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("MV_LEDGER_2020"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2020"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2020"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  11 - filter("MV_LEDGER_2020"."ACCOUNTING_PERIOD"<=6 AND "MV_LEDGER_2020"."FISCAL_YEAR"=2020)
PCT correctly identifies the stale partition in this query on period 7 only, and prevents query rewrite.
Plan hash value: 1321682226
---------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                             | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                                      |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|     1 |  HASH GROUP BY                                        |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|- *  2 |   HASH JOIN                                           |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|     3 |    NESTED LOOPS                                       |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|-    4 |     STATISTICS COLLECTOR                              |                   |       |       |            |          |       |       |
|- *  5 |      HASH JOIN                                        |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|     6 |       NESTED LOOPS                                    |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|-    7 |        STATISTICS COLLECTOR                           |                   |       |       |            |          |       |       |
|- *  8 |         HASH JOIN                                     |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|     9 |          NESTED LOOPS                                 |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|-   10 |           STATISTICS COLLECTOR                        |                   |       |       |            |          |       |       |
|    11 |            PARTITION RANGE SINGLE                     |                   |     1 |    44 |     3   (0)| 00:00:01 |     3 |     3 |
|    12 |             PARTITION RANGE SINGLE                    |                   |     1 |    44 |     3   (0)| 00:00:01 |     8 |     8 |
|  * 13 |              TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER         |     1 |    44 |     3   (0)| 00:00:01 |    36 |    36 |
|  * 14 |               INDEX RANGE SCAN                        | PSXLEDGER         |     1 |       |     2   (0)| 00:00:01 |    36 |    36 |
|  * 15 |           INDEX RANGE SCAN                            | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|- * 16 |          INDEX RANGE SCAN                             | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|  * 17 |        INDEX RANGE SCAN                               | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|- * 18 |       INDEX RANGE SCAN                                | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|  * 19 |     INDEX RANGE SCAN                                  | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
|- * 20 |    INDEX RANGE SCAN                                   | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   5 - access("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   8 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  13 - filter("A"."CURRENCY_CD"='GBP')
  14 - access("A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2020 AND "A"."ACCOUNTING_PERIOD"=7)
  15 - access("L1"."SELECTOR_NUM"=30982 AND "A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
       filter("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  16 - access("L1"."SELECTOR_NUM"=30982)
  17 - access("L"."SELECTOR_NUM"=30985 AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
       filter("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
  18 - access("L"."SELECTOR_NUM"=30985)
  19 - access("L2"."SELECTOR_NUM"=30984 AND "A"."ACCOUNT"="L2"."RANGE_FROM_10")
       filter("A"."ACCOUNT"="L2"."RANGE_FROM_10")
  20 - access("L2"."SELECTOR_NUM"=30984)
Query rewrite is prevented if a stale partition is not pruned. It is all or nothing. The query is not expanded and rewritten to use the materialized view for periods 1 to 6 and the underlying table for period 7.
Plan hash value: 3827045647
------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |   561 | 52173 |  3670   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY               |                   |   561 | 52173 |  3670   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                  |                   |   561 | 52173 |  3669   (1)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN          | PS_PSTREESELECT10 |   227 |  4313 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                 |                   |  2468 |   178K|  3667   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN     |                   |   210 |  6090 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN        | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT             |                   |   105 |  1890 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN       | PS_PSTREESELECT10 |   105 |  1890 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE   |                   | 23486 |  1032K|  3664   (1)| 00:00:01 |     3 |     3 |
|  10 |      PARTITION RANGE ITERATOR|                   | 23486 |  1032K|  3664   (1)| 00:00:01 |     2 |     8 |
|* 11 |       TABLE ACCESS FULL      | PS_LEDGER         | 23486 |  1032K|  3664   (1)| 00:00:01 |    29 |    42 |
------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  11 - filter("A"."ACCOUNTING_PERIOD"<=7 AND "A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2020 AND
              "A"."CURRENCY_CD"='GBP')
Again, the materialized view refresh process truncates and repopulates the whole partition, not the sub-partition. So we are still processing a whole fiscal year.
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2020" TRUNCATE  PARTITION LEDGER_2020
…
/* MV_REFRESH (INS) */ INSERT /*+ APPEND BYPASS_RECURSIVE_CHECK */  INTO "SCOTT"."MV_LEDGER_2020" PARTITION ( LEDGER_2020 ) ("BUSINESS_UNIT",
"ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" , 
"PS_LEDGER"."ACCOUNT" , "PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" P0, "PS_LEDGER"."ACCOUNTING_PERIOD" , 
SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR"=2020 AND "PS_LEDGER"."LEDGER"='ACTUALS' 
AND "PS_LEDGER"."CURRENCY_CD"='GBP') AND  ( ( (  (  (  ( "PS_LEDGER"."FISCAL_YEAR" < 2021 )  )  )  )  ) )GROUP BY
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…

Demonstration 6: Mismatching Partitioning

In this example, I am still composite partitioning the ledger table in the same way: range partitioned on FISCAL_YEAR and range subpartitioned by ACCOUNTING_PERIOD with 14 periods per fiscal year, using a template so that each partition has the same subpartitions. This time, however, the materialized views will be partitioned differently from the detail table.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (FISCAL_YEAR) 
SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
SUBPARTITION TEMPLATE
(SUBPARTITION ap_bf VALUES LESS THAN (1) 
,SUBPARTITION ap_01 VALUES LESS THAN (2) 
,SUBPARTITION ap_02 VALUES LESS THAN (3) 
,SUBPARTITION ap_03 VALUES LESS THAN (4) 
,SUBPARTITION ap_04 VALUES LESS THAN (5) 
,SUBPARTITION ap_05 VALUES LESS THAN (6) 
,SUBPARTITION ap_06 VALUES LESS THAN (7) 
,SUBPARTITION ap_07 VALUES LESS THAN (8) 
,SUBPARTITION ap_08 VALUES LESS THAN (9) 
,SUBPARTITION ap_09 VALUES LESS THAN (10) 
,SUBPARTITION ap_10 VALUES LESS THAN (11) 
,SUBPARTITION ap_11 VALUES LESS THAN (12) 
,SUBPARTITION ap_12 VALUES LESS THAN (13) 
,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
 (PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS
,PARTITION ledger_2020 VALUES LESS THAN (2021) 
,PARTITION ledger_2021 VALUES LESS THAN (2022)
) ENABLE ROW MOVEMENT NOLOGGING
/
@treeselectors
@popledger
I will create two materialized views, one for 2019 and one for 2020. I will only range partition the MV on accounting period because each contains only a single fiscal year.
CREATE MATERIALIZED VIEW mv_ledger_2019
PARTITION BY RANGE (ACCOUNTING_PERIOD)
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE)
) PCTFREE 0 COMPRESS
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year = 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/

CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (ACCOUNTING_PERIOD)
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE)
) PCTFREE 0 COMPRESS PARALLEL
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year = 2020
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
@mvpop
USER_MVIEW_DETAIL_RELATIONS reports that PCT does apply to these materialized views. USER_MVIEW_DETAIL_SUBPARTITION correctly identifies the sub-partition into which the new data was added as stale, but it is reported as stale for both materialized views, even though we can see that it is not needed by MV_LEDGER_2019.
@pop2020m7.sql
23:57:09 SQL> SELECT MVIEW_NAME, STALENESS, LAST_REFRESH_TYPE, COMPILE_STATE FROM USER_MVIEWS ORDER BY MVIEW_NAME;

MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2019  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

Elapsed: 00:00:00.00
23:57:09 SQL> select * from user_mview_detail_relations;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2019  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                       55                        1
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                       55                        1

Elapsed: 00:00:13.46
23:57:23 SQL> select * from user_mview_detail_partition;

no rows selected

Elapsed: 00:00:00.00
23:57:23 SQL> select * from user_mview_detail_subpartition where freshness != 'FRESH';

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAIL_PARTITION_NAM DETAIL_SUBPARTITION_ DETAIL_SUBPARTITION_POSITION FRESH
---------- --------------- ---------- --------------- -------------------- -------------------- ---------------------------- -----
SCOTT      MV_LEDGER_2019  SCOTT      PS_LEDGER       LEDGER_2020          LEDGER_2020_AP_07                               8 STALE
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       LEDGER_2020          LEDGER_2020_AP_07                               8 STALE
A query on 2019 continues to be rewritten to use MV_LEDGER_2019, even though the MV needs compilation.
Plan hash value: 1498194812
----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                   |  1703 |   128K|   421   (2)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                   |                   |  1703 |   128K|   421   (2)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                      |                   |  1703 |   128K|   420   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN              | PS_PSTREESELECT10 |   238 |  4522 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                     |                   |  7156 |   405K|   418   (2)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN         |                   |   208 |  6032 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN            | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                 |                   |   104 |  1872 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN           | PS_PSTREESELECT10 |   104 |  1872 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE ITERATOR     |                   | 68804 |  1948K|   415   (2)| 00:00:01 |     2 |     7 |
|* 10 |      MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2019    | 68804 |  1948K|   415   (2)| 00:00:01 |     2 |     7 |
----------------------------------------------------------------------------------------------------------------------
….
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MV_LEDGER_2019"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2019"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2019"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("MV_LEDGER_2019"."ACCOUNTING_PERIOD"<=6)
Queries on periods 1 to 6 in 2020 are also rewritten, to use MV_LEDGER_2020.
Plan hash value: 3016493666
------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                   | 12328 |   927K|       |   653   (2)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                   |                   | 12328 |   927K|  1080K|   653   (2)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                      |                   | 12328 |   927K|       |   429   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN              | PS_PSTREESELECT10 |   238 |  4522 |       |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                     |                   | 51748 |  2931K|       |   427   (2)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN         |                   |   208 |  6032 |       |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN            | PS_PSTREESELECT05 |     2 |    22 |       |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                 |                   |   104 |  1872 |       |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN           | PS_PSTREESELECT10 |   104 |  1872 |       |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE ITERATOR     |                   |   496K|    13M|       |   423   (2)| 00:00:01 |     2 |     7 |
|* 10 |      MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2020    |   496K|    13M|       |   423   (2)| 00:00:01 |     2 |     7 |
------------------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MV_LEDGER_2020"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2020"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2020"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("MV_LEDGER_2020"."ACCOUNTING_PERIOD"<=6)
Quite correctly, the query on 2020 period 7 is not rewritten because the underlying partition is stale.
Plan hash value: 1321682226
---------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                             | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                                      |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|     1 |  HASH GROUP BY                                        |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|- *  2 |   HASH JOIN                                           |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|     3 |    NESTED LOOPS                                       |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|-    4 |     STATISTICS COLLECTOR                              |                   |       |       |            |          |       |       |
|- *  5 |      HASH JOIN                                        |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|     6 |       NESTED LOOPS                                    |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|-    7 |        STATISTICS COLLECTOR                           |                   |       |       |            |          |       |       |
|- *  8 |         HASH JOIN                                     |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|     9 |          NESTED LOOPS                                 |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|-   10 |           STATISTICS COLLECTOR                        |                   |       |       |            |          |       |       |
|    11 |            PARTITION RANGE SINGLE                     |                   |     1 |    44 |     3   (0)| 00:00:01 |     3 |     3 |
|    12 |             PARTITION RANGE SINGLE                    |                   |     1 |    44 |     3   (0)| 00:00:01 |     8 |     8 |
|  * 13 |              TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER         |     1 |    44 |     3   (0)| 00:00:01 |    36 |    36 |
|  * 14 |               INDEX RANGE SCAN                        | PSXLEDGER         |     1 |       |     2   (0)| 00:00:01 |    36 |    36 |
|  * 15 |           INDEX RANGE SCAN                            | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|- * 16 |          INDEX RANGE SCAN                             | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|  * 17 |        INDEX RANGE SCAN                               | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|- * 18 |       INDEX RANGE SCAN                                | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|  * 19 |     INDEX RANGE SCAN                                  | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
|- * 20 |    INDEX RANGE SCAN                                   | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   5 - access("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   8 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  13 - filter("A"."CURRENCY_CD"='GBP')
  14 - access("A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2020 AND "A"."ACCOUNTING_PERIOD"=7)
  15 - access("L1"."SELECTOR_NUM"=30982 AND "A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
       filter("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  16 - access("L1"."SELECTOR_NUM"=30982)
  17 - access("L"."SELECTOR_NUM"=30985 AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
       filter("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
  18 - access("L"."SELECTOR_NUM"=30985)
  19 - access("L2"."SELECTOR_NUM"=30984 AND "A"."ACCOUNT"="L2"."RANGE_FROM_10")
       filter("A"."ACCOUNT"="L2"."RANGE_FROM_10")
  20 - access("L2"."SELECTOR_NUM"=30984)
Both MVs are compressed after the initial creation. Note the sizes of the populated partitions in MV_LEDGER_2020: about 256 blocks each, at roughly 284 rows per block.
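The partition size report below is driven from optimizer statistics in the data dictionary. A minimal sketch of the kind of query behind it (the exact report script is not shown; a similar query against USER_TAB_SUBPARTITIONS produces the subpartition rows seen in the other demonstrations):
SELECT   table_name, partition_position, partition_name
,        num_rows, blocks
,        ROUND(num_rows/NULLIF(blocks,0),1) rows_per_block
,        compression, compress_for
FROM     user_tab_partitions
WHERE    table_name LIKE 'MV_LEDGER%'
ORDER BY table_name, partition_position
/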
                                          Sub-                                             Rows
                Part                      Part                                              per
TABLE_NAME       Pos PARTITION_NAME        Pos SUBPARTITION_NAME         NUM_ROWS BLOCKS  Block COMPRESS COMPRESS_FOR
--------------- ---- -------------------- ---- ------------------------- -------- ------ ------ -------- ----------------
MV_LEDGER_2019     1 AP_BF                                                  72886    252  289.2 ENABLED  BASIC
                   2 AP_01                                                  72925    252  289.4 ENABLED  BASIC
                   3 AP_02                                                  72736    251  289.8 ENABLED  BASIC
                   4 AP_03                                                  72745    251  289.8 ENABLED  BASIC
                   5 AP_04                                                  72649    251  289.4 ENABLED  BASIC
                   6 AP_05                                                  71947    249  288.9 ENABLED  BASIC
                   7 AP_06                                                  72903    252  289.3 ENABLED  BASIC
                   8 AP_07                                                  72510    250  290.0 ENABLED  BASIC
                   9 AP_08                                                  72520    251  288.9 ENABLED  BASIC
                  10 AP_09                                                  72965    252  289.5 ENABLED  BASIC
                  11 AP_10                                                  72209    250  288.8 ENABLED  BASIC
                  12 AP_11                                                  72647    251  289.4 ENABLED  BASIC
                  13 AP_12                                                  73121    253  289.0 ENABLED  BASIC
                  14 AP_CF                                                   1999     25   80.0 ENABLED  BASIC
                                                                           946762   3290  287.8

MV_LEDGER_2020     1 AP_BF                                                  72475    256  283.1 ENABLED  BASIC
                   2 AP_01                                                  72981    256  285.1 ENABLED  BASIC
                   3 AP_02                                                  72726    256  284.1 ENABLED  BASIC
                   4 AP_03                                                  72844    256  284.5 ENABLED  BASIC
                   5 AP_04                                                  72709    256  284.0 ENABLED  BASIC
                   6 AP_05                                                  72535    256  283.3 ENABLED  BASIC
                   7 AP_06                                                  72419    256  282.9 ENABLED  BASIC
                   8 AP_07                                                      0      0        ENABLED  BASIC
                   9 AP_08                                                      0      0        ENABLED  BASIC
                  10 AP_09                                                      0      0        ENABLED  BASIC
                  11 AP_10                                                      0      0        ENABLED  BASIC
                  12 AP_11                                                      0      0        ENABLED  BASIC
                  13 AP_12                                                      0      0        ENABLED  BASIC
                  14 AP_CF                                                      0      0        ENABLED  BASIC
                                                                           508689   1792  283.9
Let's look at the trace of the refresh processes. Both materialized views were marked as NEEDS_COMPILE, so both were refreshed. However, the trace shows that the refresh has changed from truncate to delete, and the insert is no longer done in direct-path mode; because the materialized views are not partitioned on FISCAL_YEAR, Oracle cannot simply truncate an MV partition that corresponds to the stale PCT partition of the detail table. The refresh of MV_LEDGER_2019 didn't actually change any data: because a 2020 subpartition had been changed, both refreshes tried to process fiscal year 2020, but MV_LEDGER_2019 contains only 2019 data, so nothing was deleted and nothing was inserted.
…
/* MV_REFRESH (DEL) */ DELETE FROM "SCOTT"."MV_LEDGER_2019" WHERE  ( ( ( (2020 <= "FISCAL_YEAR" AND "FISCAL_YEAR" < 2021)  )) )
…
/* MV_REFRESH (INS) */ INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "SCOTT"."MV_LEDGER_2019" ("BUSINESS_UNIT", "ACCOUNT", "CHARTFIELD1", 
"FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" , "PS_LEDGER"."ACCOUNT" , 
"PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" , "PS_LEDGER"."ACCOUNTING_PERIOD" , SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  
FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR"=2019 AND "PS_LEDGER"."LEDGER"='ACTUALS' AND "PS_LEDGER"."CURRENCY_CD"='GBP') 
AND  ( ( ( (2020 <= "PS_LEDGER"."FISCAL_YEAR" AND "PS_LEDGER"."FISCAL_YEAR" < 2021) ) )  )GROUP BY 
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…

…
/* MV_REFRESH (DEL) */ DELETE FROM "SCOTT"."MV_LEDGER_2020" WHERE  ( ( ( (2020 <= "FISCAL_YEAR" AND "FISCAL_YEAR" < 2021)  )) )
…
/* MV_REFRESH (INS) */ INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "SCOTT"."MV_LEDGER_2020" ("BUSINESS_UNIT", "ACCOUNT", "CHARTFIELD1", 
"FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" , "PS_LEDGER"."ACCOUNT" , 
"PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" , "PS_LEDGER"."ACCOUNTING_PERIOD" , SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  
FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR"=2020 AND "PS_LEDGER"."LEDGER"='ACTUALS' AND "PS_LEDGER"."CURRENCY_CD"='GBP') 
AND  ( ( ( (2020 <= "PS_LEDGER"."FISCAL_YEAR" AND "PS_LEDGER"."FISCAL_YEAR" < 2021) ) )  )GROUP BY 
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…
However, the 2020 materialized view has gone from about 256 to 384 blocks per period, and from about 285 down to 189 rows per block. The data is no longer compressed because it was not inserted in direct-path mode, even though there was still a commit between the delete and insert statements.
                                          Sub-                                             Rows
                Part                      Part                                              per
TABLE_NAME       Pos PARTITION_NAME        Pos SUBPARTITION_NAME         NUM_ROWS BLOCKS  Block COMPRESS COMPRESS_FOR
--------------- ---- -------------------- ---- ------------------------- -------- ------ ------ -------- ------------------------------
MV_LEDGER_2019     1 AP_BF                                                  72886    252  289.2 ENABLED  BASIC
                   2 AP_01                                                  72925    252  289.4 ENABLED  BASIC
                   3 AP_02                                                  72736    251  289.8 ENABLED  BASIC
                   4 AP_03                                                  72745    251  289.8 ENABLED  BASIC
                   5 AP_04                                                  72649    251  289.4 ENABLED  BASIC
                   6 AP_05                                                  71947    249  288.9 ENABLED  BASIC
                   7 AP_06                                                  72903    252  289.3 ENABLED  BASIC
                   8 AP_07                                                  72510    250  290.0 ENABLED  BASIC
                   9 AP_08                                                  72520    251  288.9 ENABLED  BASIC
                  10 AP_09                                                  72965    252  289.5 ENABLED  BASIC
                  11 AP_10                                                  72209    250  288.8 ENABLED  BASIC
                  12 AP_11                                                  72647    251  289.4 ENABLED  BASIC
                  13 AP_12                                                  73121    253  289.0 ENABLED  BASIC
                  14 AP_CF                                                   1999     25   80.0 ENABLED  BASIC
                                                                           946762   3290  287.8

MV_LEDGER_2020     1 AP_BF                                                  72475    384  188.7 ENABLED  BASIC
                   2 AP_01                                                  72981    384  190.1 ENABLED  BASIC
                   3 AP_02                                                  72726    384  189.4 ENABLED  BASIC
                   4 AP_03                                                  72844    384  189.7 ENABLED  BASIC
                   5 AP_04                                                  72709    384  189.3 ENABLED  BASIC
                   6 AP_05                                                  72535    384  188.9 ENABLED  BASIC
                   7 AP_06                                                  72419    384  188.6 ENABLED  BASIC
                   8 AP_07                                                  72795   1006   72.4 ENABLED  BASIC
                   9 AP_08                                                      0      0        ENABLED  BASIC
                  10 AP_09                                                      0      0        ENABLED  BASIC
                  11 AP_10                                                      0      0        ENABLED  BASIC
                  12 AP_11                                                      0      0        ENABLED  BASIC
                  13 AP_12                                                      0      0        ENABLED  BASIC
                  14 AP_CF                                                      0      0        ENABLED  BASIC
                                                                           581484   3694  157.4
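If the compression matters, the affected partition can be rebuilt afterwards. A sketch, assuming basic table compression and that the materialized view's container table can be moved directly:
ALTER TABLE mv_ledger_2020 MOVE PARTITION ap_07 PCTFREE 0 COMPRESS UPDATE INDEXES;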
MV_CAPABILITIES reports that PCT is available, and it is: stale partitions are correctly identified, and they prevent query rewrite where they cannot be pruned.
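This report is produced by DBMS_MVIEW.EXPLAIN_MVIEW, which writes into MV_CAPABILITIES_TABLE (created by utlxmv.sql). A minimal sketch of producing it, as the author's exact reporting script is not shown:
BEGIN
  dbms_mview.explain_mview('MV_LEDGER_2019');
  dbms_mview.explain_mview('MV_LEDGER_2020');
END;
/

SELECT   mvname, capability_name, possible, related_text, msgtxt
FROM     mv_capabilities_table
ORDER BY mvname, seq
/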
MVNAME                         CAPABILITY_NAME                P REL_TEXT             MSGTXT
------------------------------ ------------------------------ - -------------------- ------------------------------------------------------------
MV_LEDGER_2019                 PCT                            Y
                               REFRESH_COMPLETE               Y
                               REFRESH_FAST                   Y
                               REWRITE                        Y
                               PCT_TABLE                      Y PS_LEDGER
                               REFRESH_FAST_AFTER_INSERT      N SCOTT.PS_LEDGER      the detail table does not have a materialized view log
                               REFRESH_FAST_AFTER_ONETAB_DML  N POSTED_TOTAL_AMT     SUM(expr) without COUNT(expr)
                               REFRESH_FAST_AFTER_ONETAB_DML  N                      see the reason why REFRESH_FAST_AFTER_INSERT is disabled
                               REFRESH_FAST_AFTER_ONETAB_DML  N                      COUNT(*) is not present in the select list
                               REFRESH_FAST_AFTER_ONETAB_DML  N                      SUM(expr) without COUNT(expr)
                               REFRESH_FAST_AFTER_ANY_DML     N                      see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
                               REFRESH_FAST_PCT               Y
                               REWRITE_FULL_TEXT_MATCH        Y
                               REWRITE_PARTIAL_TEXT_MATCH     Y
                               REWRITE_GENERAL                Y
                               REWRITE_PCT                    Y
                               PCT_TABLE_REWRITE              Y PS_LEDGER

MV_LEDGER_2020                 PCT                            Y
                               REFRESH_COMPLETE               Y
                               REFRESH_FAST                   Y
                               REWRITE                        Y
                               PCT_TABLE                      Y PS_LEDGER
                               REFRESH_FAST_AFTER_INSERT      N SCOTT.PS_LEDGER      the detail table does not have a materialized view log
                               REFRESH_FAST_AFTER_ONETAB_DML  N POSTED_TOTAL_AMT     SUM(expr) without COUNT(expr)
                               REFRESH_FAST_AFTER_ONETAB_DML  N                      see the reason why REFRESH_FAST_AFTER_INSERT is disabled
                               REFRESH_FAST_AFTER_ONETAB_DML  N                      COUNT(*) is not present in the select list
                               REFRESH_FAST_AFTER_ONETAB_DML  N                      SUM(expr) without COUNT(expr)
                               REFRESH_FAST_AFTER_ANY_DML     N                      see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
                               REFRESH_FAST_PCT               Y
                               REWRITE_FULL_TEXT_MATCH        Y
                               REWRITE_PARTIAL_TEXT_MATCH     Y
                               REWRITE_GENERAL                Y
                               REWRITE_PCT                    Y
                               PCT_TABLE_REWRITE              Y PS_LEDGER
Mismatching partitioning caused the non-atomic refresh to fall back to delete and conventional-path insert, as if it were an atomic refresh, and so the data was no longer compressed.

 

Demonstration 7: Partition on Accounting Period, Subpartition on Fiscal Year!

This final example still composite partitions the ledger table, but now I will swap the partitioning and sub-partitioning keys. I will range partition on ACCOUNTING_PERIOD into 14 partitions and range subpartition each of them on FISCAL_YEAR. The intention is to demonstrate that partition elimination will still work correctly and that I will only have to refresh a single accounting period.
However, you will see that there are some problems, and I can't work around all of them. 
I will use a template so that each accounting period partition will have the same fiscal year subpartitions.
I will still only range partition the MV on accounting period. We don't need to partition it on FISCAL_YEAR since it only contains a single year.
CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
PARTITION BY RANGE (ACCOUNTING_PERIOD)
SUBPARTITION BY RANGE (FISCAL_YEAR) 
SUBPARTITION TEMPLATE
(SUBPARTITION ledger_2018 VALUES LESS THAN (2019) 
,SUBPARTITION ledger_2019 VALUES LESS THAN (2020)
,SUBPARTITION ledger_2020 VALUES LESS THAN (2021) 
,SUBPARTITION ledger_2021 VALUES LESS THAN (2022))
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE))
ENABLE ROW MOVEMENT NOLOGGING
/
I can't specify physical attributes on subpartitions, only on partitions, so I have to come along afterwards and alter the sub-partitions. I am going to do that before I populate the data so that it is compressed on load, rather than loading it and then rebuilding the subpartitions afterwards.
set serveroutput on 
DECLARE
 l_sql CLOB;
BEGIN
 FOR i IN (
  select * 
  from user_tab_subpartitions
  where table_name = 'PS_LEDGER'
  and subpartition_name like 'AP%LEDGER%201%'   /*only the historical fiscal years, 2018 and 2019*/
  and (compression = 'DISABLED' OR pct_free>0)  /*skip subpartitions that are already compressed*/
  order by table_name, partition_position, subpartition_position
 ) LOOP
  l_sql := 'ALTER TABLE '||i.table_name||' MOVE SUBPARTITION '||i.subpartition_name||' COMPRESS UPDATE INDEXES';
  dbms_output.put_line(l_sql);
  EXECUTE IMMEDIATE l_sql;
 END LOOP;
END;
/
@treeselectors
@popledger
                                          Sub-                                             Rows
                Part                      Part                                              per
TABLE_NAME       Pos PARTITION_NAME        Pos SUBPARTITION_NAME         NUM_ROWS BLOCKS  Block COMPRESS COMPRESS_FOR
--------------- ---- -------------------- ---- ------------------------- -------- ------ ------ -------- ------------
PS_LEDGER          1 AP_BF                                                 261458   5147   50.8 NONE
                   1                         1 AP_BF_LEDGER_2018            84565   1372   61.6 ENABLED  BASIC
                   1                         2 AP_BF_LEDGER_2019            84519   1371   61.6 ENABLED  BASIC
                   1                         3 AP_BF_LEDGER_2020            84673   2193   38.6 DISABLED
                   1                         4 AP_BF_LEDGER_2021             7701    211   36.5 DISABLED
                   2 AP_01                                                 261108   5174   50.5 NONE
                   2                         1 AP_01_LEDGER_2018            84268   1368   61.6 ENABLED  BASIC
                   2                         2 AP_01_LEDGER_2019            84233   1366   61.7 ENABLED  BASIC
                   2                         3 AP_01_LEDGER_2020            84831   2224   38.1 DISABLED
                   2                         4 AP_01_LEDGER_2021             7776    216   36.0 DISABLED
                   3 AP_02                                                 261174   5172   50.5 NONE
                   3                         1 AP_02_LEDGER_2018            84372   1369   61.6 ENABLED  BASIC
                   3                         2 AP_02_LEDGER_2019            84444   1370   61.6 ENABLED  BASIC
                   3                         3 AP_02_LEDGER_2020            84596   2218   38.1 DISABLED
                   3                         4 AP_02_LEDGER_2021             7762    215   36.1 DISABLED
                   4 AP_03                                                 259982   5149   50.5 NONE
                   4                         1 AP_03_LEDGER_2018            84105   1364   61.7 ENABLED  BASIC
                   4                         2 AP_03_LEDGER_2019            83820   1360   61.6 ENABLED  BASIC
                   4                         3 AP_03_LEDGER_2020            84284   2210   38.1 DISABLED
                   4                         4 AP_03_LEDGER_2021             7773    215   36.2 DISABLED
                   5 AP_04                                                 261376   5177   50.5 NONE
                   5                         1 AP_04_LEDGER_2018            84378   1369   61.6 ENABLED  BASIC
                   5                         2 AP_04_LEDGER_2019            84649   1374   61.6 ENABLED  BASIC
                   5                         3 AP_04_LEDGER_2020            84652   2220   38.1 DISABLED
                   5                         4 AP_04_LEDGER_2021             7697    214   36.0 DISABLED
                   6 AP_05                                                 261772   5180   50.5 NONE
                   6                         1 AP_05_LEDGER_2018            84984   1378   61.7 ENABLED  BASIC
                   6                         2 AP_05_LEDGER_2019            84656   1374   61.6 ENABLED  BASIC
                   6                         3 AP_05_LEDGER_2020            84507   2216   38.1 DISABLED
                   6                         4 AP_05_LEDGER_2021             7625    212   36.0 DISABLED
                   7 AP_06                                                 260581   5165   50.5 NONE
                   7                         1 AP_06_LEDGER_2018            83994   1363   61.6 ENABLED  BASIC
                   7                         2 AP_06_LEDGER_2019            84150   1366   61.6 ENABLED  BASIC
                   7                         3 AP_06_LEDGER_2020            84729   2222   38.1 DISABLED
                   7                         4 AP_06_LEDGER_2021             7708    214   36.0 DISABLED
                   8 AP_07                                                 184118   3163   58.2 NONE
                   8                         1 AP_07_LEDGER_2018            84863   1377   61.6 ENABLED  BASIC
                   8                         2 AP_07_LEDGER_2019            84155   1366   61.6 ENABLED  BASIC
                   8                         3 AP_07_LEDGER_2020             7587    211   36.0 DISABLED
                   8                         4 AP_07_LEDGER_2021             7513    209   35.9 DISABLED
                   9 AP_08                                                 184619   3173   58.2 NONE
                   9                         1 AP_08_LEDGER_2018            84547   1372   61.6 ENABLED  BASIC
                   9                         2 AP_08_LEDGER_2019            84775   1376   61.6 ENABLED  BASIC
                   9                         3 AP_08_LEDGER_2020             7662    213   36.0 DISABLED
                   9                         4 AP_08_LEDGER_2021             7635    212   36.0 DISABLED
                  10 AP_09                                                 184375   3168   58.2 NONE
                  10                         1 AP_09_LEDGER_2018            84407   1370   61.6 ENABLED  BASIC
                  10                         2 AP_09_LEDGER_2019            84645   1373   61.6 ENABLED  BASIC
                  10                         3 AP_09_LEDGER_2020             7570    210   36.0 DISABLED
                  10                         4 AP_09_LEDGER_2021             7753    215   36.1 DISABLED
                  11 AP_10                                                 184327   3166   58.2 NONE
                  11                         1 AP_10_LEDGER_2018            84300   1368   61.6 ENABLED  BASIC
                  11                         2 AP_10_LEDGER_2019            84738   1374   61.7 ENABLED  BASIC
                  11                         3 AP_10_LEDGER_2020             7656    212   36.1 DISABLED
                  11                         4 AP_10_LEDGER_2021             7633    212   36.0 DISABLED
                  12 AP_11                                                 184489   3167   58.3 NONE
                  12                         1 AP_11_LEDGER_2018            84406   1369   61.7 ENABLED  BASIC
                  12                         2 AP_11_LEDGER_2019            84861   1376   61.7 ENABLED  BASIC
                  12                         3 AP_11_LEDGER_2020             7700    213   36.2 DISABLED
                  12                         4 AP_11_LEDGER_2021             7522    209   36.0 DISABLED
                  13 AP_12                                                 184244   3168   58.2 NONE
                  13                         1 AP_12_LEDGER_2018            84611   1373   61.6 ENABLED  BASIC
                  13                         2 AP_12_LEDGER_2019            84155   1365   61.7 ENABLED  BASIC
                  13                         3 AP_12_LEDGER_2020             7776    216   36.0 DISABLED
                  13                         4 AP_12_LEDGER_2021             7702    214   36.0 DISABLED
                  14 AP_CF                                                   4800    154   31.2 NONE
                  14                         1 AP_CF_LEDGER_2018             2200     53   41.5 ENABLED  BASIC
                  14                         2 AP_CF_LEDGER_2019             2200     53   41.5 ENABLED  BASIC
                  14                         3 AP_CF_LEDGER_2020              200     24    8.3 DISABLED
                  14                         4 AP_CF_LEDGER_2021              200     24    8.3 DISABLED
                                                                          2938423  55323   53.1
If I query accounting periods 1 to 6 of fiscal year 2018, I get correct partition elimination: Oracle inspects 6 partitions, and 1 sub-partition of each. So swapping the composite partitioning types and columns should not affect query performance.
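The query was of roughly this shape (a sketch reconstructed from the predicate section of the plan below; the tree selector table names, the grouping columns and the literal selector numbers are assumptions):
SELECT a.business_unit, a.account, a.chartfield1, SUM(a.posted_total_amt)
FROM   ps_ledger a
,      pstreeselect05 l1
,      pstreeselect10 l
,      pstreeselect10 l2
WHERE  a.ledger = 'ACTUALS'
AND    a.currency_cd = 'GBP'
AND    a.fiscal_year = 2018
AND    a.accounting_period BETWEEN 1 AND 6
AND    l1.selector_num = 30982 AND a.business_unit = l1.range_from_05
AND    l.selector_num  = 30985 AND a.chartfield1   = l.range_from_10
AND    l2.selector_num = 30984 AND a.account       = l2.range_from_10
GROUP BY a.business_unit, a.account, a.chartfield1
/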
Plan hash value: 2690363151
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |   717 | 66681 |  2244   (1)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY              |                   |   717 | 66681 |  2244   (1)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                 |                   |   717 | 66681 |  2243   (1)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN         | PS_PSTREESELECT10 |   258 |  4902 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                |                   |  2776 |   200K|  2241   (1)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN    |                   |   208 |  6032 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN       | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT            |                   |   104 |  1872 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN      | PS_PSTREESELECT10 |   104 |  1872 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE ITERATOR|                   | 26693 |  1173K|  2238   (1)| 00:00:01 |     2 |     7 |
|  10 |      PARTITION RANGE SINGLE |                   | 26693 |  1173K|  2238   (1)| 00:00:01 |     1 |     1 |
|* 11 |       TABLE ACCESS FULL     | PS_LEDGER         | 26693 |  1173K|  2238   (1)| 00:00:01 |       |       |
-----------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  11 - filter("A"."ACCOUNTING_PERIOD"<=6 AND "A"."FISCAL_YEAR"=2018 AND "A"."LEDGER"='ACTUALS' AND
              "A"."CURRENCY_CD"='GBP')
As before, I create two materialized views, for fiscal years 2019 and 2020, each range partitioned on ACCOUNTING_PERIOD only.
CREATE MATERIALIZED VIEW mv_ledger_2019
PARTITION BY RANGE (ACCOUNTING_PERIOD)
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE)
) PCTFREE 0 COMPRESS
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year = 2019
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/

CREATE MATERIALIZED VIEW mv_ledger_2020
PARTITION BY RANGE (ACCOUNTING_PERIOD)
(PARTITION ap_bf VALUES LESS THAN (1) 
,PARTITION ap_01 VALUES LESS THAN (2) 
,PARTITION ap_02 VALUES LESS THAN (3) 
,PARTITION ap_03 VALUES LESS THAN (4) 
,PARTITION ap_04 VALUES LESS THAN (5) 
,PARTITION ap_05 VALUES LESS THAN (6) 
,PARTITION ap_06 VALUES LESS THAN (7) 
,PARTITION ap_07 VALUES LESS THAN (8) 
,PARTITION ap_08 VALUES LESS THAN (9) 
,PARTITION ap_09 VALUES LESS THAN (10) 
,PARTITION ap_10 VALUES LESS THAN (11) 
,PARTITION ap_11 VALUES LESS THAN (12) 
,PARTITION ap_12 VALUES LESS THAN (13) 
,PARTITION ap_cf VALUES LESS THAN (MAXVALUE)
) PCTFREE 0 COMPRESS PARALLEL
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE AS
SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
sum(posted_total_amt) posted_total_amt
FROM ps_ledger
WHERE fiscal_year = 2020
AND   ledger = 'ACTUALS'
AND   currency_cd = 'GBP'
GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
/
@mvpop
USER_MVIEW_DETAIL_SUBPARTITION correctly identifies the one stale sub-partition, but USER_MVIEW_DETAIL_PARTITION reports that the whole range partition is stale.
@pop2020m7.sql
MVIEW_NAME      STALENESS           LAST_REF COMPILE_STATE
--------------- ------------------- -------- -------------------
MV_LEDGER_2019  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE
MV_LEDGER_2020  NEEDS_COMPILE       COMPLETE NEEDS_COMPILE

01:02:53 SQL> select * from user_mview_detail_relations;

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAILOBJ DETAILOBJ_ALIAS      D NUM_FRESH_PCT_PARTITIONS NUM_STALE_PCT_PARTITIONS
---------- --------------- ---------- --------------- --------- -------------------- - ------------------------ ------------------------
SCOTT      MV_LEDGER_2019  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                       55                        1
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       TABLE     PS_LEDGER            Y                       55                        1

01:03:06 SQL> select * from user_mview_detail_subpartition where freshness != 'FRESH';

                           Detailobj
OWNER      MVIEW_NAME      Owner      DETAILOBJ_NAME  DETAIL_PARTITION_NAM DETAIL_SUBPARTITION_ DETAIL_SUBPARTITION_POSITION FRESH
---------- --------------- ---------- --------------- -------------------- -------------------- ---------------------------- -----
SCOTT      MV_LEDGER_2019  SCOTT      PS_LEDGER       AP_07                AP_07_LEDGER_2020                               3 STALE
SCOTT      MV_LEDGER_2020  SCOTT      PS_LEDGER       AP_07                AP_07_LEDGER_2020                               3 STALE
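The partition-level freshness can be queried in a similar way (a sketch; its output is not reproduced here, but it should report partition AP_07 as stale for both materialized views):
select mview_name, detail_partition_name, detail_partition_position, freshness
from   user_mview_detail_partition
where  freshness != 'FRESH';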
I get query rewrite as you would expect, and as seen in demonstration 5. A query for fiscal year 2019, period 7 still rewrites because the corresponding sub-partition is not stale.
Plan hash value: 387550712
----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                   |  1967 |   147K|    76   (3)| 00:00:01 |       |       |
|   1 |  HASH GROUP BY                   |                   |  1967 |   147K|    76   (3)| 00:00:01 |       |       |
|*  2 |   HASH JOIN                      |                   |  1967 |   147K|    75   (2)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN              | PS_PSTREESELECT10 |   258 |  4902 |     2   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN                     |                   |  7576 |   429K|    73   (2)| 00:00:01 |       |       |
|   5 |     MERGE JOIN CARTESIAN         |                   |   208 |  6032 |     3   (0)| 00:00:01 |       |       |
|*  6 |      INDEX RANGE SCAN            | PS_PSTREESELECT05 |     2 |    22 |     1   (0)| 00:00:01 |       |       |
|   7 |      BUFFER SORT                 |                   |   104 |  1872 |     2   (0)| 00:00:01 |       |       |
|*  8 |       INDEX RANGE SCAN           | PS_PSTREESELECT10 |   104 |  1872 |     1   (0)| 00:00:01 |       |       |
|   9 |     PARTITION RANGE SINGLE       |                   | 72486 |  2052K|    70   (2)| 00:00:01 |     8 |     8 |
|* 10 |      MAT_VIEW REWRITE ACCESS FULL| MV_LEDGER_2019    | 72486 |  2052K|    70   (2)| 00:00:01 |     8 |     8 |
----------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MV_LEDGER_2019"."ACCOUNT"="L2"."RANGE_FROM_10")
   3 - access("L2"."SELECTOR_NUM"=30984)
   4 - access("MV_LEDGER_2019"."BUSINESS_UNIT"="L1"."RANGE_FROM_05" AND
              "MV_LEDGER_2019"."CHARTFIELD1"="L"."RANGE_FROM_10")
   6 - access("L1"."SELECTOR_NUM"=30982)
   8 - access("L"."SELECTOR_NUM"=30985)
  10 - filter("MV_LEDGER_2019"."ACCOUNTING_PERIOD"=7)
A query for fiscal year 2020, period 7 doesn't rewrite because the sub-partition is stale.
Plan hash value: 1321682226
---------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                             | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                                      |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|     1 |  HASH GROUP BY                                        |                   |     1 |    92 |     7  (15)| 00:00:01 |       |       |
|- *  2 |   HASH JOIN                                           |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|     3 |    NESTED LOOPS                                       |                   |     1 |    92 |     6   (0)| 00:00:01 |       |       |
|-    4 |     STATISTICS COLLECTOR                              |                   |       |       |            |          |       |       |
|- *  5 |      HASH JOIN                                        |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|     6 |       NESTED LOOPS                                    |                   |     1 |    73 |     5   (0)| 00:00:01 |       |       |
|-    7 |        STATISTICS COLLECTOR                           |                   |       |       |            |          |       |       |
|- *  8 |         HASH JOIN                                     |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|     9 |          NESTED LOOPS                                 |                   |     1 |    55 |     4   (0)| 00:00:01 |       |       |
|-   10 |           STATISTICS COLLECTOR                        |                   |       |       |            |          |       |       |
|    11 |            PARTITION RANGE SINGLE                     |                   |     1 |    44 |     3   (0)| 00:00:01 |     8 |     8 |
|    12 |             PARTITION RANGE SINGLE                    |                   |     1 |    44 |     3   (0)| 00:00:01 |     3 |     3 |
|  * 13 |              TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER         |     1 |    44 |     3   (0)| 00:00:01 |    31 |    31 |
|  * 14 |               INDEX RANGE SCAN                        | PSXLEDGER         |     1 |       |     2   (0)| 00:00:01 |    31 |    31 |
|  * 15 |           INDEX RANGE SCAN                            | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|- * 16 |          INDEX RANGE SCAN                             | PS_PSTREESELECT05 |     1 |    11 |     1   (0)| 00:00:01 |       |       |
|  * 17 |        INDEX RANGE SCAN                               | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|- * 18 |       INDEX RANGE SCAN                                | PS_PSTREESELECT10 |     1 |    18 |     1   (0)| 00:00:01 |       |       |
|  * 19 |     INDEX RANGE SCAN                                  | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
|- * 20 |    INDEX RANGE SCAN                                   | PS_PSTREESELECT10 |     1 |    19 |     1   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------------------------------------
…
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("A"."ACCOUNT"="L2"."RANGE_FROM_10")
   5 - access("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
   8 - access("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  13 - filter("A"."CURRENCY_CD"='GBP')
  14 - access("A"."LEDGER"='ACTUALS' AND "A"."FISCAL_YEAR"=2020 AND "A"."ACCOUNTING_PERIOD"=7)
  15 - access("L1"."SELECTOR_NUM"=30982 AND "A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
       filter("A"."BUSINESS_UNIT"="L1"."RANGE_FROM_05")
  16 - access("L1"."SELECTOR_NUM"=30982)
  17 - access("L"."SELECTOR_NUM"=30985 AND "A"."CHARTFIELD1"="L"."RANGE_FROM_10")
       filter("A"."CHARTFIELD1"="L"."RANGE_FROM_10")
  18 - access("L"."SELECTOR_NUM"=30985)
  19 - access("L2"."SELECTOR_NUM"=30984 AND "A"."ACCOUNT"="L2"."RANGE_FROM_10")
       filter("A"."ACCOUNT"="L2"."RANGE_FROM_10")
  20 - access("L2"."SELECTOR_NUM"=30984)
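If it is not obvious why a query no longer rewrites, DBMS_MVIEW.EXPLAIN_REWRITE will report the reason. This is only a sketch: the query text is an assumed simplification of the period 7 query, and REWRITE_TABLE must first be created with $ORACLE_HOME/rdbms/admin/utlxrw.sql.
@?/rdbms/admin/utlxrw.sql
BEGIN
  dbms_mview.explain_rewrite(
    query => q'[SELECT business_unit, account, chartfield1, SUM(posted_total_amt)
                FROM   ps_ledger
                WHERE  ledger = 'ACTUALS' AND currency_cd = 'GBP'
                AND    fiscal_year = 2020 AND accounting_period = 7
                GROUP BY business_unit, account, chartfield1]',
    mv    => 'MV_LEDGER_2020',
    statement_id => 'demo7');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'demo7';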
As we have already seen, the PCT refresh processes all subpartitions of a partition. Not surprisingly, it truncates the partition for period 7 in both the 2019 and 2020 MVs, even though only the 2020 data was affected. So, because period 7 was stale in one fiscal year, it reprocessed all fiscal years. We would have had the same problem if I had composite partitioned the materialized views to match the table; the refresh would still have truncated and reprocessed all fiscal years for period 7.
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2019" TRUNCATE  PARTITION AP_07 UPDATE GLOBAL INDEXES
…
/* MV_REFRESH (INS) */ INSERT /*+ APPEND BYPASS_RECURSIVE_CHECK */  INTO "SCOTT"."MV_LEDGER_2019" PARTITION ( AP_07 ) ("BUSINESS_UNIT", 
"ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" , 
"PS_LEDGER"."ACCOUNT" , "PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" , "PS_LEDGER"."ACCOUNTING_PERIOD" P0, 
SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR"=2019 AND "PS_LEDGER"."LEDGER"='ACTUALS' 
AND "PS_LEDGER"."CURRENCY_CD"='GBP') AND  ( ( (  (  (  ( "PS_LEDGER"."ACCOUNTING_PERIOD" >= 7 )  )  )  
AND  (  (  ( "PS_LEDGER"."ACCOUNTING_PERIOD" < 8 )  )  )  )  ) )GROUP BY 
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…
/* MV_REFRESH (ATB) */ ALTER TABLE "SCOTT"."MV_LEDGER_2020" TRUNCATE  PARTITION AP_07 UPDATE GLOBAL INDEXES
…
/* MV_REFRESH (INS) */ INSERT /*+ APPEND BYPASS_RECURSIVE_CHECK */  INTO "SCOTT"."MV_LEDGER_2020" PARTITION ( AP_07 ) ("BUSINESS_UNIT", 
"ACCOUNT", "CHARTFIELD1", "FISCAL_YEAR", "ACCOUNTING_PERIOD", "POSTED_TOTAL_AMT") SELECT  /*+ X_DYN_PRUNE */ "PS_LEDGER"."BUSINESS_UNIT" , 
"PS_LEDGER"."ACCOUNT" , "PS_LEDGER"."CHARTFIELD1" , "PS_LEDGER"."FISCAL_YEAR" , "PS_LEDGER"."ACCOUNTING_PERIOD" P0, 
SUM("PS_LEDGER"."POSTED_TOTAL_AMT")  FROM "PS_LEDGER" "PS_LEDGER" WHERE ("PS_LEDGER"."FISCAL_YEAR"=2020 AND "PS_LEDGER"."LEDGER"='ACTUALS' 
AND "PS_LEDGER"."CURRENCY_CD"='GBP') AND  ( ( (  (  (  ( "PS_LEDGER"."ACCOUNTING_PERIOD" >= 7 )  )  )  
AND  (  (  ( "PS_LEDGER"."ACCOUNTING_PERIOD" < 8 )  )  )  )  ) )GROUP BY 
"PS_LEDGER"."BUSINESS_UNIT","PS_LEDGER"."ACCOUNT","PS_LEDGER"."CHARTFIELD1","PS_LEDGER"."FISCAL_YEAR","PS_LEDGER"."ACCOUNTING_PERIOD"
…
Partition pruning still worked correctly after swapping the partitioning and sub-partitioning columns. 
It also correctly controlled query rewrite. 
However, the PCT refresh processed all fiscal years for the single accounting period, rather than all accounting periods for the single fiscal year. That is less work if you have fewer fiscal years than accounting periods; generally, I see systems that contain only 3 to 6 fiscal years of data. However, it also refreshed materialized views whose data had not actually changed.
 Swapping the partitioning columns has also made the management of the partitions in the ledger table much more complicated.
  • I can't interval sub-partition, so I can't automatically add partitions for future fiscal years on demand. Instead, I will have to add a new fiscal year subpartition to each of the 14 range partitions (see the sketch after this list).
  • I can't specify storage options or compression attributes on sub-partitions in the create table DDL command, so I have to come along afterwards with PL/SQL to alter the sub-partitions. 
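Adding the subpartitions for the next fiscal year would have to be done partition by partition, something like this (a sketch, assuming the table definition above and the template naming convention):
BEGIN
  FOR i IN (
    SELECT partition_name
    FROM   user_tab_partitions
    WHERE  table_name = 'PS_LEDGER'
    ORDER BY partition_position
  ) LOOP
    /*add a FISCAL_YEAR subpartition for 2022 to each accounting period partition*/
    EXECUTE IMMEDIATE 'ALTER TABLE ps_ledger MODIFY PARTITION '||i.partition_name
      ||' ADD SUBPARTITION '||i.partition_name||'_ledger_2022 VALUES LESS THAN (2023)';
  END LOOP;
END;
/
/*keep the template in step so any future partitions get the new subpartition too*/
ALTER TABLE ps_ledger SET SUBPARTITION TEMPLATE
(SUBPARTITION ledger_2018 VALUES LESS THAN (2019)
,SUBPARTITION ledger_2019 VALUES LESS THAN (2020)
,SUBPARTITION ledger_2020 VALUES LESS THAN (2021)
,SUBPARTITION ledger_2021 VALUES LESS THAN (2022)
,SUBPARTITION ledger_2022 VALUES LESS THAN (2023))
/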
On balance, I don't think I would choose to implement this.

Conclusion

PCT does track individually stale partitions and subpartitions, but the subsequent refresh is only done by partition. If one subpartition is stale, then all the subpartitions in that partition are refreshed. If you use composite partitioning, you may have to accept reprocessing more data than is strictly necessary, rather than adopt a less effective partitioning strategy. 
The subpartition key should be subordinate to the partition key. In the ledger example that I have used, I think it is better to partition by fiscal year and subpartition by accounting period (demonstration 5) than vice versa (demonstration 7). 
PCT doesn't work when there are multiple partitioning key columns, so you need to find a single partition key column, used by the application, that is sufficiently selective to restrict the number of partitions being refreshed. 
The partitioning of the table and of the materialized view must be of the same type and on the same column. Otherwise, while PCT may still work, the refresh may not be able to populate the materialized view in direct-path mode, and it may not be possible to maintain compressed materialized views. 
There will be a balance to be struck. On the one hand, application performance can be improved by partitioning application tables so that partition elimination is effective, but that partitioning strategy may not work with PCT. On the other, reporting performance can be improved by maintaining pre-aggregated data in materialized views, and PCT can help to keep those materialized views fresh with less overhead.