Thursday, April 11, 2024

Configuring Shared Global Area (SGA) in a Multitenant Database

I have been working on a PeopleSoft Financials application that we have converted from a stand-alone database to be the only pluggable database (PDB) in an Oracle 19c container database (CDB).  We have been getting ORA-4031 (unable to allocate shared memory) errors in the PeopleSoft application.  

It has taken a while to solve and test, and I have to acknowledge quite a lot of advice from my friends.  

If you are wondering why you should be involved with your local Oracle user group, and regularly attend their meetings, this is an example of why: you can ask people who have experience of different systems and situations that you haven't encountered yet!

Documentation

I am going to look at 6 initialisation parameters that control the use of the SGA.  The Oracle documentation, even in 21c, suggests that most of them can be set at both CDB and PDB level.  However, more recent Oracle guidance, confirmed by my own experience, suggests that is not a good idea.

  • SGA_MAX_SIZE can only be set at CDB level.  It sets the size of the shared memory segment that is the SGA.  It cannot be changed during the life of the database instance.  
    • Recommendation:
      • It can be useful to set it higher than SGA_TARGET if you plan either to increase SGA_TARGET, or add PDBs to the CDB, without restarting the instances.
  • SGA_TARGET "specifies the total size of all SGA components".  Use this parameter to control the memory usage of each PDB.  The setting at CDB level must be at least the sum of the settings for each PDB.
    • Recommendations:
      • Use only this parameter at PDB level to manage the memory consumption of the PDB.
      • In a CDB with only a single PDB, set SGA_TARGET to the same value at CDB and PDB levels.  
      • Therefore, where there are multiple PDBs, SGA_TARGET at CDB level should be set to the sum of the setting for each PDB.  However, I haven't tested this yet.
      • There is no recommendation to reserve SGA for use by the CDB only, nor in my experience is there any need so to do.
  • SHARED_POOL_SIZE sets the minimum amount of shared memory reserved to the shared pool.  It can optionally be set in a PDB.  
    • Recommendation: However, do not set SHARED_POOL_SIZE at PDB level.  It can be set at CDB level.
  • DB_CACHE_SIZE sets the minimum amount of shared memory reserved to the buffer cache. It can optionally be set in a PDB.  
    • Recommendation: However, do not set DB_CACHE_SIZE at PDB level.  It can be set at CDB level.
  • SGA_MIN_SIZE has no effect at CDB level.  It can be set at PDB level, up to half of the manageable SGA.
    • Recommendation: However, do not set SGA_MIN_SIZE.
  • INMEMORY_SIZE: If you are using in-memory query, this must be set at CDB level in order to reserve memory for the in-memory store.  The parameter defaults to 0, in which case in-memory query is not available.  The in-memory pool is not managed by Automatic Shared Memory Management (ASMM), but it does count toward the total SGA used in SGA_TARGET.
    • Recommendation: Therefore, it must also be set in the PDB where in-memory is being used; otherwise, we found that (contrary to the documentation) the parameter defaults to 0 and in-memory query is disabled in that PDB.
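To see what is currently set, a query like this can be run in the CDB root and then repeated in each PDB after switching container (a sketch; V$SYSTEM_PARAMETER reports the values in effect for the current container):

SELECT name, value, isdefault, ispdb_modifiable
FROM   v$system_parameter
WHERE  name IN ('sga_max_size','sga_target','shared_pool_size'
               ,'db_cache_size','sga_min_size','inmemory_size')
ORDER BY name;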

Oracle Notes

There are a lot of Oracle support notes on the subject of SGA management in a multitenant database.  The older notes talk about setting memory parameters in the PDB, but a later note and a bug advise setting these parameters only at CDB level, and not at all in the PDB.

  • About memory configuration parameter on each PDBs (Doc ID 2655314.1), November 2023
    • "As a best practice, please do not to set SHARED_POOL_SIZE and DB_CACHE_SIZE on each PDBs and please manage automatically by setting SGA_TARGET."
    • "This best practice is confirmed by development in Bug 30692720"
    • Bug 30692720 discusses how the parameters are validated.  Eg. "Sum(PDB sga size) > CDB sga size"
    • Bug 34079542: "Unset sga_min_size parameter in PDB."

SGA Management with a Parse-Intensive System (PeopleSoft)

PeopleSoft systems dynamically generate lots of non-shareable SQL code.  This produces a lot of parsing and consumes more shared pool.  ASMM can respond by shrinking the buffer cache and growing the shared pool.  However, this can lead to more physical I/O and degraded performance, and it is not beneficial for the database to cache dynamic SQL statements that are never going to be executed again.  Other parse-intensive systems can also exhibit this behaviour.

In PeopleSoft, I normally set DB_CACHE_SIZE and SHARED_POOL_SIZE to minimum values to stop ASMM shuffling too far in either direction.  With a large SGA, moving memory between these pools can become a performance problem in its own right.  

We removed SHARED_POOL_SIZE, DB_CACHE_SIZE and SGA_MIN_SIZE settings from the PDB.  The only SGA parameters set at PDB level are SGA_TARGET and INMEMORY_SIZE.  

SHARED_POOL_SIZE and DB_CACHE_SIZE are set as I usually would for PeopleSoft, but at CDB level, to guarantee minimum shared pool and buffer cache sizes.  
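Putting that together, the configuration looks something like this (a minimal sketch with purely illustrative sizes; PSFTPDB is a hypothetical PDB name):

ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER SYSTEM SET SGA_TARGET       = 20G SCOPE=BOTH;
ALTER SYSTEM SET SHARED_POOL_SIZE = 4G  SCOPE=BOTH;
ALTER SYSTEM SET DB_CACHE_SIZE    = 8G  SCOPE=BOTH;

ALTER SESSION SET CONTAINER = PSFTPDB;
ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=BOTH;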

This is straightforward when there is only one PDB in the CDB.   I have yet to see what happens when I have another active PDB with a non-PeopleSoft system and a different kind of workload that puts less stress on the shared pool and more on the buffer cache.

TL;DR

  • Do not set any SGA parameter in a PDB other than SGA_TARGET and (if necessary) INMEMORY_SIZE.
  • Do not set DB_CACHE_SIZE or SHARED_POOL_SIZE at PDB level.  They can be set at CDB level. 
  • Do not set SGA_MIN_SIZE at either PDB or CDB level.


Thursday, February 22, 2024

Table Clusters: 6. Testing the Cluster & Conclusion (TL;DR)

This post is the last part of a series that discusses table clustering in Oracle.

  1. Introduction and Ancient History
  2. Cluster & Cluster Key Design Considerations
  3. Populating the Cluster with DBMS_PARALLEL_EXECUTE
  4. Checking the Cluster Key
  5. Using the Cluster Key Index instead of the Primary/Unique Key Index
  6. Testing the Cluster & Conclusion (TL;DR)

Testing

We did get improved performance with the clustered tables.  More significantly, we encountered less inter-process contention, and so were able to run more concurrent processes, and the overall elapsed time of all the processes was reduced.

Looking at just the performance of the bulk delete statements on the result tables, there is a significant reduction in DB time and physical I/O time on the clustered tables.  The reduction in physical I/O is not only because the table is smaller, but because there is no need to perform consistent-read recovery on the blocks: there are fewer reads from the undo segment, and less CPU is consumed creating consistent-read copies in the buffer cache.

Statement: DELETE FROM PS_GP_RSLT_ACUM… (delete statement only)

                               Heap Table   Clustered Table
DB Time (s)                          2182              1662
db file sequential read (s)          1451               891
CPU (s)                               941               531


Statement: DELETE FROM PS_GP_RSLT_ABS… (delete statement only)

                               Heap Table   Clustered Table
DB Time (s)                           781               330
db file sequential read (s)           340               210
CPU (s)                               300               120

GP_RSLT_PIN is another, albeit smaller, result table.  It is a candidate for clustering; however, it was not clustered for this test and therefore did not show any significant improvement.  It was subsequently clustered.

Statement: DELETE FROM PS_GP_RSLT_PIN… (delete statement only)

                               Heap Table   Heap in Cluster Test
DB Time (s)                           270                    250
db file sequential read (s)           110                    120
CPU (s)                               110                     90

The execution plans for some queries on the clustered tables changed to use the cluster key index, which resulted in poorer performance.  I had to introduce some SQL profiles to reinstate the original execution plans.  
However, the execution plans for these delete statements also switched to the cluster key index, resulting in improved performance.  So it depends.

Conclusion (TL;DR)

  • Table partitioning can help you find data efficiently by allowing the database to eliminate partitions that cannot contain the data.  However, you must be running Enterprise Edition and license the partitioning option.
  • Table clustering is effective when you regularly query data from multiple tables with similar keys: rows from those tables are stored in the same data blocks, saving the overhead of retrieving multiple blocks.  It is available on any Oracle database and does not require any additional licence.
  • Both partitioning and clustering can help avoid the overhead of read consistency by storing dissimilar data in different blocks.
  • Sometimes, using the cluster key index can result in worse performance than using the original indexes.  A SQL profile or SQL plan baseline may be needed to stabilise some execution plans.

Monday, February 19, 2024

Table Clusters: 5. Using the Cluster Key Index instead of the Primary/Unique Key Index

This post is part of a series that discusses table clustering in Oracle.

  1. Introduction and Ancient History
  2. Cluster & Cluster Key Design Considerations
  3. Populating the Cluster with DBMS_PARALLEL_EXECUTE
  4. Checking the Cluster Key
  5. Using the Cluster Key Index instead of the Primary/Unique Key Index
  6. Testing the Cluster & Conclusion (TL;DR)

In my test case, the cluster key index is made up of the first 7 columns of the unique key index.  One side-effect of this similarity of the keys is that the optimizer may choose to use the cluster key index where previously it used the unique index.  

The cluster key index is a unique index.  It contains only one entry for each distinct cluster key value that points to the first block that contains rows with those cluster key values.  As we saw in the previous post, there are many rows in the table for each distinct cluster key.  Therefore, the cluster key index is much smaller than the unique index on any table in the cluster.  This contributes to making it appear cheaper to access.
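For reference, the cluster and its key index in this example might be created along these lines (a sketch: the column datatypes are my assumptions, and SIZE is set to the 8K block size, as discussed below):

CREATE CLUSTER ps_gp_rslt_cluster
(emplid          VARCHAR2(11)
,cal_run_id      VARCHAR2(18)
,empl_rcd        NUMBER
,gp_paygroup     VARCHAR2(10)
,cal_id          VARCHAR2(18)
,orig_cal_run_id VARCHAR2(18)
,rslt_seg_num    NUMBER
) SIZE 8192;

CREATE INDEX ps_gp_rslt_cluster_idx ON CLUSTER ps_gp_rslt_cluster;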

The clustering factor is fundamental to determining the cost of using an index.  It is a measure of how many I/Os the database would perform if it were to read every row in that table via the index in index order.  Notwithstanding that blocks may be cached, every time the scan changes to a different data block in the table, that is another I/O.  

In my case, the clustering factor of the cluster key index is also the same value as the number of rows and the number of distinct keys.  This is because I have set the cluster size equal to the block size so that each cluster key value points to a different block, and each block only contains rows for a single cluster key value.  The clustering factor of the cluster key index is much lower than that of the unique indexes, also making it look cheaper to access.
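The index statistics below can be listed with a query along these lines (note that for a cluster index, USER_INDEXES reports the cluster name in TABLE_NAME):

SELECT table_name, index_name, uniqueness, prefix_length
,      leaf_blocks, distinct_keys, num_rows, clustering_factor
FROM   user_indexes
WHERE  table_name IN ('PS_GP_RSLT_CLUSTER','PS_GP_RSLT_ABS','PS_GP_RSLT_ACUM','PS_GP_RSLT_PIN')
ORDER BY table_name;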
TABLE_NAME           INDEX_NAME               UNIQUENES PREFIX_LENGTH LEAF_BLOCKS DISTINCT_KEYS   NUM_ROWS CLUSTERING_FACTOR
-------------------- ------------------------ --------- ------------- ----------- ------------- ---------- -----------------
PS_GP_RSLT_CLUSTER   PS_GP_RSLT_CLUSTER_IDX   UNIQUE                       111541       8875383    8875383           8875383
PS_GP_RSLT_ABS       PS_GP_RSLT_ABS           UNIQUE                8     1271559     152019130  152019130          10806251
PS_GP_RSLT_ACUM      PS_GP_RSLT_ACUM          UNIQUE                8     8421658     762210387  762210387         101166426
PS_GP_RSLT_PIN       PS_GP_RSLT_PIN           UNIQUE                9     3894799     327189471  327189471          31774871

I still need to create the unique index on each table to enforce uniqueness.  I have found that the optimizer tends to choose the cluster key index in preference to the unique index because it is smaller and has a lower clustering factor.  When I increased the length of the cluster key from 3 to 7 columns, the size and clustering factor of the cluster key index increased, and the clustering factor of the unique indexes decreased, partly because the rows are less disordered with respect to the index key, and partly because the table became smaller as each cluster key is only stored once.  Although this reduced the cost of accessing the unique indexes, I still found that the optimizer tends to choose the cluster key index over the unique index.

Sometimes, the switch to the cluster key index is beneficial, but sometimes performance degrades as in the case of this query.
SELECT … 
FROM PS_GP_RSLT_ACUM RA ,PS_GP_ACCUMULATOR A ,PS_GP_PYE_HIST_WRK H 
WHERE H.EMPLID BETWEEN :1 AND :2 AND H.CAL_RUN_ID=:3 
AND H.RUN_CNTL_ID=:4 AND H.OPRID=:5 
AND H.EMPLID=RA.EMPLID 
AND H.EMPL_RCD=RA.EMPL_RCD 
AND H.GP_PAYGROUP=RA.GP_PAYGROUP 
AND H.CAL_ID=RA.CAL_ID 
AND H.ORIG_CAL_RUN_ID=RA.ORIG_CAL_RUN_ID 
AND H.HIST_CAL_RUN_ID=RA.CAL_RUN_ID 
AND H.RSLT_SEG_NUM=RA.RSLT_SEG_NUM 
AND RA.PIN_NUM=A.PIN_NUM 
AND RA.ACM_PRD_OPTN<>'1' 
AND(H.CALC_TYPE=A.CALC_TYPE OR H.HIST_TYPE= 'G') 
ORDER BY RA.EMPLID,H.PRC_ORD_TS,RA.EMPL_RCD,RA.PIN_NUM
PS_GP_PYE_HIST_WRK is equi-joined to PS_GP_RSLT_ACUM by all 7 cluster key columns, so the cluster key index can satisfy this join.  The plan has switched to using the cluster key index.

Plan hash value: 4007126853
 
-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                         |       |       |  2369 (100)|          |
|   1 |  SORT ORDER BY                          |                         |   133 | 36841 |  2369   (1)| 00:00:01 |
|*  2 |   FILTER                                |                         |       |       |            |          |
|*  3 |    HASH JOIN                            |                         |   133 | 36841 |  2368   (1)| 00:00:01 |
|   4 |     NESTED LOOPS                        |                         |   393 |   103K|  2348   (1)| 00:00:01 |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| PS_GP_PYE_HIST_WRK      |  1164 |   156K|    12   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN                  | PS_GP_PYE_HIST_WRK      |     1 |       |    11   (0)| 00:00:01 |
|*  7 |      TABLE ACCESS CLUSTER               | PS_GP_RSLT_ACUM         |     1 |   132 |     3   (0)| 00:00:01 |
|*  8 |       INDEX UNIQUE SCAN                 | PS_GP_RSLT_CLUSTER_IDX  |     1 |       |     1   (0)| 00:00:01 |
|   9 |     INDEX FAST FULL SCAN                | PSBGP_ACCUMULATOR       |  9208 | 64456 |    20   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------------
The profile of the ASH data by plan line ID shows that most of the time is spent on physical I/O on line 7 of the plan, physically scanning the blocks in the cluster for each cluster key.
   SQL Plan         SQL Plan                                               H   P E        ASH
 Hash Value          Line ID EVENT                                         P     x       Secs
----------- ---------------- --------------------------------------------- --- - --- --------
 4007126853                7 db file sequential read                       N   N Y        120
 4007126853                  db file sequential read                       N   N Y         80
I can force the plan back to using the unique index on PS_GP_RSLT_ACUM with a hint, SQL Profile, SQL Patch, or SQL Plan Baseline, and there is a reduction in database response time.
NB: You cannot make a cluster key index invisible.
Plan hash value: 1843812660
 
------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                          |                    |       |       |   845 (100)|          |
|     1 |  SORT ORDER BY                            |                    |     1 |   277 |   845   (1)| 00:00:01 |
|  *  2 |   FILTER                                  |                    |       |       |            |          |
|  *  3 |    HASH JOIN                              |                    |     1 |   277 |   844   (1)| 00:00:01 |
|-    4 |     NESTED LOOPS                          |                    |     1 |   277 |   844   (1)| 00:00:01 |
|-    5 |      STATISTICS COLLECTOR                 |                    |       |       |            |          |
|     6 |       NESTED LOOPS                        |                    |     1 |   270 |   843   (1)| 00:00:01 |
|     7 |        TABLE ACCESS BY INDEX ROWID        | PS_GP_PYE_HIST_WRK |   416 | 57408 |     6   (0)| 00:00:01 |
|  *  8 |         INDEX RANGE SCAN                  | PS_GP_PYE_HIST_WRK |     1 |       |     5   (0)| 00:00:01 |
|  *  9 |        TABLE ACCESS BY INDEX ROWID BATCHED| PS_GP_RSLT_ACUM    |     1 |   132 |     5   (0)| 00:00:01 |
|  * 10 |         INDEX RANGE SCAN                  | PS_GP_RSLT_ACUM    |     1 |       |     4   (0)| 00:00:01 |
|- * 11 |      INDEX RANGE SCAN                     | PSBGP_ACCUMULATOR  |     1 |     7 |     1   (0)| 00:00:01 |
|    12 |     INDEX FAST FULL SCAN                  | PSBGP_ACCUMULATOR  |     1 |     7 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------

   SQL Plan         SQL Plan                                               H   P E        ASH
 Hash Value          Line ID EVENT                                         P     x       Secs
----------- ---------------- --------------------------------------------- --- - --- --------
 1843812660               10 db file sequential read                       N   N Y         70
 1843812660                9 db file sequential read                       N   N Y         60
 1843812660                  CPU+CPU Wait                                  N   N Y         50
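For example, a SQL patch can apply an index hint without changing the application code (a sketch: the sql_id is a placeholder, and the hint simply directs the RA row source to the unique index):

DECLARE
  l_name VARCHAR2(128);
BEGIN
  l_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_id    => '0123456789abc' /*placeholder sql_id*/,
    hint_text => 'INDEX(RA PS_GP_RSLT_ACUM)',
    name      => 'force_unique_index_gp_rslt_acum');
END;
/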

Table Cached Blocks 

The table_cached_blocks statistics preference specifies the average number of blocks assumed to be cached in the buffer cache when calculating the index clustering factor.  When DBMS_STATS calculates the clustering factor of an index, it does not count visits to table blocks that are assumed to be cached because they were among the last n distinct table blocks visited, where n is the value of table_cached_blocks.

We have already seen that with 7 cluster key columns, no more than 7 blocks are required to hold any one cluster key.  If I set table_cached_blocks to at least 7, then when Oracle scans the table blocks in unique key order (which matches the cluster key order for the first 7 columns), it does not count additional visits to blocks for the same cluster key.  Thus we see a reduction in the clustering factor on the unique index.  There is no advantage to a higher value of this setting.  We do not see a significant reduction in the clustering factor on other indexes with different leading columns. 
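The preference is set per table with DBMS_STATS, and statistics must then be re-gathered (a sketch using this schema's names; 8 comfortably covers the 7-block worst case above):

exec DBMS_STATS.SET_TABLE_PREFS('SYSADM','PS_GP_RSLT_ACUM','TABLE_CACHED_BLOCKS','8');
exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>'SYSADM',tabname=>'PS_GP_RSLT_ACUM');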

TCB=1
TABLE_NAME           INDEX_NAME               PREFIX_LENGTH LEAF_BLOCKS   NUM_ROWS CLUSTERING_FACTOR DEGREE     LAST_ANALYZED    
-------------------- ------------------------ ------------- ----------- ---------- ----------------- ---------- -----------------
PS_GP_RSLT_ABS       PS_GP_RSLT_ABS                       8     1271559  152019130          10806251 1          12-01-24 15:33:02
PS_GP_RSLT_ACUM      PS_GP_RSLT_ACUM                      8     8421658  762210387         101166426 1          12-01-24 15:37:55
PS_GP_RSLT_PIN       PS_GP_RSLT_PIN                       9     3894799  327189471          31774872 1          12-01-24 15:39:00
TCB=8
 TABLE_NAME           INDEX_NAME               PREFIX_LENGTH LEAF_BLOCKS   NUM_ROWS CLUSTERING_FACTOR DEGREE     LAST_ANALYZED    
-------------------- ------------------------ ------------- ----------- ---------- ----------------- ---------- -----------------
PS_GP_RSLT_ABS       PS_GP_RSLT_ABS                       8     1271559  152019130           8217000 1          12-01-24 15:05:42
PS_GP_RSLT_ACUM      PS_GP_RSLT_ACUM                      8     8421658  762210387          16658798 1          12-01-24 15:10:40
PS_GP_RSLT_PIN       PS_GP_RSLT_PIN                       9     3894799  327189471          11321888 1          12-01-24 15:01:37

TCB=16

TABLE_NAME           INDEX_NAME               PREFIX_LENGTH LEAF_BLOCKS   NUM_ROWS CLUSTERING_FACTOR DEGREE     LAST_ANALYZED    
-------------------- ------------------------ ------------- ----------- ---------- ----------------- ---------- -----------------
PS_GP_RSLT_ABS       PS_GP_RSLT_ABS                       8     1271559  152019130           8217000 1          12-01-24 15:44:25
PS_GP_RSLT_ACUM      PS_GP_RSLT_ACUM                      8     8421658  762210387          16658710 1          12-01-24 15:49:29
PS_GP_RSLT_PIN       PS_GP_RSLT_PIN                       9     3894799  327189471          11321888 1          12-01-24 15:50:36

The reduction in the clustering factor can mitigate the optimizer's tendency to use the cluster key index, but it may still occur.

NB: table_cached_blocks applies only when gathering statistics with DBMS_STATS, and not to CREATE INDEX or ALTER INDEX … REBUILD operations, which use the default value of 1.  This is not a bug; it is documented behaviour of DBMS_STATS.  


TL;DR

The statistics on the cluster key index may lead the optimizer to determine the cost of using it is lower than the unique index.  The switch from the unique/primary key index to the cluster key index may result in poorer performance.  Setting Table Cached Blocks on the tables in the cluster may help.  However, you may still need to use SQL Profiles/SQL Plan Baselines/SQL Patches to force the optimizer to continue to use the unique indexes.

Friday, February 16, 2024

Table Clusters: 4. Checking the Cluster Key

This post is part of a series that discusses table clustering in Oracle.
  1. Introduction and Ancient History
  2. Cluster & Cluster Key Design Considerations
  3. Populating the Cluster with DBMS_PARALLEL_EXECUTE
  4. Checking the Cluster Key
  5. Using the Cluster Key Index instead of the Primary/Unique Key Index
  6. Testing the Cluster & Conclusion (TL;DR)

This query uses DBMS_ROWID to extract the block number from each ROWID, counts the number of distinct blocks per cluster key, and then reports how often each number of blocks per key occurs.

with x as ( --cluster key and rowid of each row
  select emplid, cal_run_id, empl_rcd, gp_paygroup, cal_id, ORIG_CAL_RUN_ID, RSLT_SEG_NUM
  ,      DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) block_no from ps_gp_rslt_abs
), y as ( --count number of rows per cluster key and block number
  select /*+MATERIALIZE*/ emplid, cal_run_id, empl_rcd, gp_paygroup, cal_id, ORIG_CAL_RUN_ID, RSLT_SEG_NUM
  ,      block_no, count(*) num_rows 
  from   x   
  group by emplid, cal_run_id, empl_rcd, gp_paygroup, cal_id, ORIG_CAL_RUN_ID, RSLT_SEG_NUM, block_no
), z as ( --count number of blocks and rows per cluster key
  select /*+MATERIALIZE*/ emplid, cal_run_id, empl_rcd, gp_paygroup, cal_id, ORIG_CAL_RUN_ID, RSLT_SEG_NUM
  ,      count(distinct block_no) num_blocks, sum(num_rows) num_rows 
  from   y
  group by emplid, cal_run_id, empl_rcd, gp_paygroup, cal_id, ORIG_CAL_RUN_ID, RSLT_SEG_NUM
)
select num_blocks, count(distinct emplid) emplids
,      sum(num_rows) sum_rows
,      median(num_rows) median_rows
,      median(num_rows)/num_blocks median_rows_per_block
from   z
group by num_blocks
order by num_blocks
/

The answer you get depends on the data, so your mileage will vary.  

Initially, I built the cluster with 3 columns in the key.  In my case, 81% of rows were organised such that they have no more than 2 data blocks per cluster key.   

NUM_BLOCKS    EMPLIDS   SUM_ROWS MEDIAN_ROWS MEDIAN_ROWS_PER_BLOCK
---------- ---------- ---------- ----------- ---------------------
         1      69638   46809975          12                    12
         2      47629   78370682          34                    17
         3      12120   14330976          68            22.6666667
         4       4598    4395844          94                  23.5
         5       2376    6941389         124                  24.8
         6        652    2510790         155            25.8333333
         7         27      34527         185            26.4285714
         8         14      12330         217                27.125
         9          9      40633         248            27.5555556
        10          1      14607         279                  27.9
        11          1        310         310            28.1818182
        12          2       2212         310            25.8333333
        13          1       1476         372            28.6153846
        14          1        372         372            26.5714286
I rebuilt the cluster with 7 key columns.  Now no cluster key has more than 7 blocks, most of the keys are in a single block, and 85% are in no more than 2.  Increasing the length of the cluster key also resulted in the table being smaller because each cluster key is only stored once.
NUM_BLOCKS    EMPLIDS   SUM_ROWS MEDIAN_ROWS    MEDIAN_ROWS_PER_BLOCK
---------- ---------- ---------- -----------  ---------------------
         1      74545   71067239          14                    14
         2      52943   57481538          40                    20
         3      13553   11185685          73            24.3333333
         4       4567    8949787         120                    30
         5       1327    3251707         150                    30
         6        144      81977         160            26.6666667
         7          3       1197       204.5            29.2142857
There is now only a small number of employees whose data is spread across many cluster blocks.  They might be slower to access, but I think I have a reasonable balance.

Thursday, February 15, 2024

Table Clusters: 3. Populating the Cluster with DBMS_PARALLEL_EXECUTE

This post is part of a series that discusses table clustering in Oracle.

The result tables being clustered are also large, containing hundreds of millions of rows.  Normally, when I have to rebuild these as non-clustered tables, I would do so in direct-path mode and with both parallel insert and parallel query. However, this is not effective for table clusters, particularly if you put multiple tables in one cluster, as rows with the same cluster key have to go into the same data blocks.

Instead, for each result table in the cluster, I have used DBMS_PARALLEL_EXECUTE to take a simple INSERT…SELECT statement, and break it into pieces that can be run concurrently on the database job scheduler.  I get the parallelism, though I also have to accept the redo on the insert.

exec DBMS_PARALLEL_EXECUTE.DROP_TASK('CLUSTER_GP_RSLT_ABS'); -- drop any previous definition of the task

DECLARE
  l_recname VARCHAR2(15) := 'GP_RSLT_ABS';
  l_src_prefix VARCHAR2(10) := 'ORIG_';
  l_task VARCHAR2(30);
  l_sql_stmt CLOB;
  l_col_list CLOB;
BEGIN
  l_task := 'CLUSTER_'||l_recname;
  
  -- build a comma-separated list of the columns on the source table
  SELECT LISTAGG(column_name,',') WITHIN GROUP(ORDER BY column_id) 
  INTO l_col_list 
  FROM user_tab_cols WHERE table_name = l_src_prefix||l_recname;
  
  -- insert the rows in one ROWID range into the clustered table
  l_sql_stmt := 'insert into PSY'||l_recname||' ('||l_col_list||') SELECT '||l_col_list
              ||' FROM '||l_src_prefix||l_recname||' WHERE rowid BETWEEN :start_id AND :end_id';
  
  DBMS_PARALLEL_EXECUTE.CREATE_TASK (l_task);
  -- chunk the source table into ROWID ranges of about 2 million rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(l_task, 'SYSADM', l_src_prefix||l_recname, true, 2e6);
  -- run the insert for each chunk on up to 24 concurrent scheduler jobs
  DBMS_PARALLEL_EXECUTE.RUN_TASK(l_task, l_sql_stmt, DBMS_SQL.NATIVE, parallel_level => 24);
END;
/

The performance of this process is the first indication of whether the cluster key is correct.  With too few columns, populating the table will be much slower because each row has to go into the block already allocated to its cluster key, or, if that block is full, a new block must be allocated.  

NB: Chunking the data by ROWID only works where the source table is a regular table.  It does not work for clustered or index-organised tables.  The alternative is to chunk by the value of a numeric column, and that doesn't work well in this case because most of the key columns are strings or dates.
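For comparison, chunking on a numeric column would be a one-line change (purely illustrative; as noted, these tables have no suitable numeric key column):

exec DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL('CLUSTER_GP_RSLT_ABS','SYSADM','ORIG_GP_RSLT_ABS','EMPL_RCD',100);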

Monitoring DBMS_PARALLEL_EXECUTE

There are several views provided by Oracle that can be used to monitor tasks created by DBMS_PARALLEL_EXECUTE.
SELECT * FROM user_parallel_execute_tasks;
                                                                            Number                                                
TASK_NAME            CHUNK_TYPE   STATUS     TABLE_OWNER TABLE_NAME         Column     TASK_COMMENT                   JOB_PREFIX  
-------------------- ------------ ---------- ----------- ------------------ ---------- ------------------------------ ------------
                                                                                               Apply                                           
                                                                                 Lang          X Ed    Fire_ Parallel                     
SQL_STMT                                                                         Flag EDITION  Trigger Apply    Level JOB_CLASS           
-------------------------------------------------------------------------------- ---- -------- ------- ----- -------- -----------------
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  FINISHED   SYSADM      PS_GP_RSLT_ABS                                               TASK$_38380  
insert into PSYGP_RSLT_ABS (EMPLID,CAL_RUN_ID,EMPL_RCD,GP_PAYGROUP,CAL_ID,ORIG_C    1 ORA$BASE         TRUE        24 DEFAULT_JOB_CLASS   

CLUSTER_GP_RSLT_ACUM ROWID_RANGE  FINISHED   SYSADM      PS_GP_RSLT_ACUM                                              TASK$_38382  
insert into PSYGP_RSLT_ACUM (EMPLID,CAL_RUN_ID,EMPL_RCD,GP_PAYGROUP,CAL_ID,ORIG_    1 ORA$BASE         TRUE        32 DEFAULT_JOB_CLASS
Each task is broken into chunks.
SELECT task_name, status, count(*) chunks
, min(start_ts) min_start_ts, max(end_ts) max_end_ts
, max(end_ts)-min(start_ts) duration
FROM user_parallel_execute_chunks 
group by task_name, status
order by min_start_ts nulls last
/

TASK_NAME            STATUS         CHUNKS MIN_START_TS            MAX_END_TS              DURATION           
-------------------- ---------- ---------- ----------------------- ----------------------- -------------------
CLUSTER_GP_RSLT_ABS  PROCESSED          80 22/12/2023 09.58.37.712 22/12/2023 10.06.32.264 +00 00:07:54.551373
CLUSTER_GP_RSLT_ACUM PROCESSED         402 22/12/2023 10.08.58.257 22/12/2023 10.38.36.820 +00 00:29:38.562700
In this case, each chunk processes a range of ROWIDs.  Each chunk is allocated to a database scheduler job.
SELECT chunk_id, task_name, status, start_rowid, end_rowid, job_name, start_ts, end_ts, error_code, error_message
FROM user_parallel_execute_chunks 
WHERE task_name = 'CLUSTER_GP_RSLT_ABS'
ORDER BY chunk_id
/

Chunk                                                                                                                                                                                          
   ID TASK_NAME            STATUS     START_ROWID        END_ROWID          JOB_NAME        START_TS                END_TS                  ERROR_CODE ERROR_MESSAGE                           
----- -------------------- ---------- ------------------ ------------------ --------------- ----------------------- ----------------------- ---------- -----------------
    1 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAAZgAAAA AAAUzUAAmAADp7VH// TASK$_38380_1   22/12/2023 09:58:37.712 22/12/2023 10:00:21.622                                                    
    2 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAADp7WAAA AAAUzUAAmAAGwkrH// TASK$_38380_3   22/12/2023 09:58:37.713 22/12/2023 10:00:20.107                                                    
    3 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAGwksAAA AAAUzUAAmAAHvwBH// TASK$_38380_2   22/12/2023 09:58:37.713 22/12/2023 10:00:14.939                                                    
    4 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAHvwCAAA AAAUzUAAmAAIn5XH// TASK$_38380_9   22/12/2023 09:58:37.864 22/12/2023 10:00:28.963                                                    
    5 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAIn5YAAA AAAUzUAAmAAJ58tH// TASK$_38380_12  22/12/2023 09:58:37.865 22/12/2023 10:00:30.494                                                    
    6 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAJ58uAAA AAAUzUAAmAAKzADH// TASK$_38380_8   22/12/2023 09:58:37.865 22/12/2023 10:00:26.049                                                    
    7 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAKzAEAAA AAAUzUAAmAALf7ZH// TASK$_38380_4   22/12/2023 09:58:37.865 22/12/2023 10:00:28.017                                                    
    8 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAALf7aAAA AAAUzUAAmAAMHGvH// TASK$_38380_10  22/12/2023 09:58:37.885 22/12/2023 10:00:23.326                                                    
    9 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAMHGwAAA AAAUzUAAmAAP5aFH// TASK$_38380_13  22/12/2023 09:58:37.907 22/12/2023 10:00:22.660                                                    
   10 CLUSTER_GP_RSLT_ABS  PROCESSED  AAAUzUAAmAAP5aGAAA AAAUzUAAnAACr1bH// TASK$_38380_5   22/12/2023 09:58:37.929 22/12/2023 10:00:21.959
…
However, one job may process many chunks.
SELECT t.task_name, t.chunk_type, t.table_name, c.chunk_id, c.job_name, c.start_ts, c.end_ts
, d.actual_start_date, d.run_duration, d.instance_id, d.session_id
FROM user_parallel_execute_tasks t
JOIN user_parallel_execute_chunks c ON c.task_name = t.task_name
JOIN user_scheduler_job_run_details d ON d.job_name = c.job_name
WHERE t.task_name = 'CLUSTER_GP_RSLT_ABS'
ORDER BY t.task_name, c.job_name, c.start_ts
/
                                                  Chunk                                                                                                             Inst             
TASK_NAME            CHUNK_TYPE   TABLE_NAME         ID JOB_NAME        START_TS                END_TS                  ACTUAL_START_DATE       RUN_DURATION          ID SESSION_ID  
-------------------- ------------ --------------- ----- --------------- ----------------------- ----------------------- ----------------------- ------------------- ---- ------------
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS      1 TASK$_38380_1   22/12/2023 09:58:37.712 22/12/2023 10:00:21.622 22/12/2023 09:58:37.660 +00 00:07:52.000000    1 3406,24003  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     23 TASK$_38380_1   22/12/2023 10:00:21.710 22/12/2023 10:02:01.916 22/12/2023 09:58:37.660 +00 00:07:52.000000    1 3406,24003  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     44 TASK$_38380_1   22/12/2023 10:02:02.008 22/12/2023 10:03:31.546 22/12/2023 09:58:37.660 +00 00:07:52.000000    1 3406,24003  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     57 TASK$_38380_1   22/12/2023 10:03:31.640 22/12/2023 10:05:05.398 22/12/2023 09:58:37.660 +00 00:07:52.000000    1 3406,24003  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     73 TASK$_38380_1   22/12/2023 10:05:05.494 22/12/2023 10:06:29.262 22/12/2023 09:58:37.660 +00 00:07:52.000000    1 3406,24003  

CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS      8 TASK$_38380_10  22/12/2023 09:58:37.885 22/12/2023 10:00:23.326 22/12/2023 09:58:37.877 +00 00:07:54.000000    1 4904,44975  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     27 TASK$_38380_10  22/12/2023 10:00:23.394 22/12/2023 10:01:59.096 22/12/2023 09:58:37.877 +00 00:07:54.000000    1 4904,44975  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     42 TASK$_38380_10  22/12/2023 10:01:59.185 22/12/2023 10:03:37.657 22/12/2023 09:58:37.877 +00 00:07:54.000000    1 4904,44975  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     61 TASK$_38380_10  22/12/2023 10:03:37.742 22/12/2023 10:05:12.680 22/12/2023 09:58:37.877 +00 00:07:54.000000    1 4904,44975  
CLUSTER_GP_RSLT_ABS  ROWID_RANGE  PS_GP_RSLT_ABS     79 TASK$_38380_10  22/12/2023 10:05:12.776 22/12/2023 10:06:32.142 22/12/2023 09:58:37.877 +00 00:07:54.000000    1 4904,44975  
…
You can also judge how well the clustering is working by looking at how much database time was consumed by the various events.  PS_GP_RSLT_ABS was inserted first, then PS_GP_RSLT_ACUM.  We can see that more time was spent on the second table that was inserted, and more time spent on physical read operations as rows have to go into specific blocks with the same cluster keys.
select c.task_name, c.status, count(distinct c.chunk_id) chunks, h.module, h.event
, sum(usecs_per_Row)/1e6 ash_secs
from gv$active_session_history h
, user_parallel_execute_chunks c
, user_parallel_execute_tasks t
where h.sample_time BETWEEN c.start_ts AND NVL(c.end_ts,SYSDATE)
and t.task_name = c.task_name
and h.action like c.job_name
group by c.task_name, c.status, h.module, h.event
order by task_name, ash_Secs desc
/
TASK_NAME            STATUS     CHUNKS MODULE          EVENT                                                            ASH_SECS
-------------------- ---------- ------ --------------- ---------------------------------------------------------------- --------
CLUSTER_GP_RSLT_ABS  PROCESSED      80 DBMS_SCHEDULER                                                                       3534
                     PROCESSED      78 DBMS_SCHEDULER  enq: FB - contention                                                 1184
                     PROCESSED      80 DBMS_SCHEDULER  db file parallel read                                                1161
                     PROCESSED      80 DBMS_SCHEDULER  buffer busy waits                                                     674
                     PROCESSED      79 DBMS_SCHEDULER  db file scattered read                                                490
…

CLUSTER_GP_RSLT_ACUM PROCESSED     401 DBMS_SCHEDULER                                                                      10174
                     PROCESSED     401 DBMS_SCHEDULER  db file sequential read                                              8813
                     PROCESSED      32 DBMS_SCHEDULER  log file switch (archiving needed)                                   4623
                     PROCESSED     389 DBMS_SCHEDULER  db file parallel read                                                1396
                     PROCESSED     383 DBMS_SCHEDULER  db file scattered read                                               1346
                     PROCESSED     295 DBMS_SCHEDULER  buffer busy waits                                                     769
                     PROCESSED     287 DBMS_SCHEDULER  enq: FB - contention                                                  715
…