Thursday, March 26, 2026

Loading and Processing JSON with PL/SQL

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Transaction Processing Database.

"JSON (JavaScript Object Notation) is a text-based format for storing and exchanging data in a way that’s both human-readable and machine-parsable. … it has grown into a very capable data format that simplifies data interchange across diverse platforms and programming languages."

I have created a demo project that integrates Strava (an application that tracks athletes' activities – in my case, cycling) with an Oracle Autonomous Database that then performs some spatial data processing.  The Strava APIs all return data in JSON.  My application, written in PL/SQL, reads and processes those messages.

Mostly, I want to hold that data in a regular database table, structured conventionally in reasonably named columns.  In different places, I have loaded that JSON data in different ways, depending on requirements and circumstances.  There are three options to choose from:
  1. Directly through a JSON Duality View
  2. Convert each name-value pair in explicit code
  3. Extract Values from JSON in Virtual Columns

Directly through a JSON Duality View

Oracle introduced JSON Relational Duality in Oracle 23ai.  A JSON duality view is a mapping between table data and JSON documents.  It is possible to extract data from a table as a JSON document simply by querying a duality view based on that table.  It is also possible to insert data into the table through duality views.
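As a minimal sketch of both directions (using the gear_dv view defined later in this post, and an abbreviated document):

```sql
-- Query the duality view: each row of the underlying table is returned
-- as one JSON document in the view's DATA column.
SELECT d.data FROM gear_dv d;

-- Insert a document through the duality view: this inserts a row into
-- the underlying GEAR table.
INSERT INTO gear_dv (data)
VALUES ('{"_id" : "b4922223", "name" : "Saracen"}');
```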

For example, Strava tracks my gear (the bike I ride, or the shoes I wear).  I can extract details of each item with the Strava API, and I get a simple JSON document in return.  This is what I get for one of my bikes.
{
  "id" : "b4922223",
  "primary" : false,
  "name" : "Saracen",
  "nickname" : "Saracen",
  "resource_state" : 3,
  "retired" : false,
  "distance" : 1321925,
  "converted_distance" : 1321.9,
  "brand_name" : null,
  "model_name" : null,
  "frame_type" : 3,
  "description" : "",
  "weight" : 14
}
I want to import that into a table in my database that corresponds to that document.
CREATE TABLE gear 
(gear_id            VARCHAR2(20) NOT NULL
,primary            BOOLEAN
,name               VARCHAR2(60) 
,nickname           VARCHAR2(60) 
,resource_state     INTEGER
,retired            BOOLEAN
,distance_m         INTEGER      
,distance_km        NUMBER       
,brand_name         VARCHAR2(60) 
,model_name         VARCHAR2(60)
--frame_type
,description        CLOB
,weight             NUMBER       
,last_updated       TIMESTAMP DEFAULT SYSTIMESTAMP
,CONSTRAINT gear_pk PRIMARY KEY(gear_id)
);
I can use a JSON duality view to make the JSON document correspond to my table structure, rather than code it explicitly.
My table uses the column GEAR_ID as the primary key, but the Strava JSON document just has 'id'.  A JSON duality view must have a key value called '_id'.  I will map that to the primary key column in this table.  
All the other name-value pairs each map to the corresponding column in the table.  However, I am not bothering to import 'frame_type'.
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW gear_dv AS
SELECT JSON {'_id'    : g.gear_id
,'primary'            : g.primary
,'name'               : g.name
,'nickname'           : g.nickname
,'resource_state'     : g.resource_state
,'retired'            : g.retired
,'distance'           : g.distance_m
,'converted_distance' : g.distance_km
,'brand_name'         : g.brand_name
,'model_name'         : g.model_name
--frame_type
,'description'        : g.description
,'weight'             : g.weight
}
FROM gear g
WITH INSERT UPDATE
/
The JSON document arrives as an HTTP response and is held in a CLOB variable.  That has to be parsed into a JSON object with JSON_OBJECT_T.PARSE(). Then I either update an existing record or insert a new one.  In either case, that is done via the duality view.  Note that
  • The Strava 'id' name is renamed to '_id' to match the duality view (the key must be called '_id').  
  • I have removed the frame_type and notification_distance name-value pairs.
  j_obj := JSON_OBJECT_T.parse(l_clob);
  l_id  := j_obj.get_string('id');

  BEGIN
    SELECT * INTO r_gear FROM gear WHERE gear_id = p_gear_id FOR UPDATE;
  EXCEPTION
    WHEN no_data_found THEN null;
  END;
  
  IF r_gear.gear_id = p_gear_id THEN
    UPDATE gear_dv d
    SET    d.data = JSON_TRANSFORM
           (value
           ,RENAME '$.id' = '_id'
           ,REMOVE '$.frame_type'
           ,REMOVE '$.notification_distance'
           )
    FROM JSON_TABLE(
           l_clob,
           '$[*]'
           COLUMNS (
             value CLOB FORMAT JSON PATH '$'
           ))
    WHERE d.data."_id" = l_id;
  ELSE
    INSERT INTO gear_dv
    SELECT JSON_TRANSFORM
           (value
           ,RENAME '$.id' = '_id'
           ,REMOVE '$.frame_type'
           ,REMOVE '$.notification_distance'
           )
    FROM JSON_TABLE(
           l_clob,
           '$[*]'
           COLUMNS (
             value CLOB FORMAT JSON PATH '$'
           )
     );
  END IF;
This approach works well where the JSON document structure closely matches the database table structure, and where I don't have to convert the inbound data with any function.  
However, if, for example, the name of the gear had to be upper case, I might put that into the definition of the duality view, thus
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW gear_dv AS
SELECT JSON {'_id'    : g.gear_id
,'name'               : UPPER(g.name)
…
}
FROM gear g WITH INSERT UPDATE
/
But then I would not be able to insert the name via the duality view.  I wouldn't get an error; the name column simply wouldn't be processed.  It would be null after an insert, or left unchanged by an update.
While the duality view is a very elegant way to map the data, it has limitations.  As soon as you need to transform data during the import, you probably have to go back to coding the mapping for each name-value pair.

Convert each name-value pair

The more conventional approach is to copy each name-value pair to a column using one of the get functions, sometimes passing the value through a function, and possibly with logic to determine whether to copy the data.
Here, I have used a row-type variable and selected the whole current row from the database before updating the data values.
BEGIN
    SELECT * INTO r_activities 
    FROM   activities 
    WHERE  activity_id = p_activity_id 
    FOR UPDATE;
  EXCEPTION
    WHEN no_data_found THEN r_activities.activity_id := p_activity_id;
  END;
…
  j_obj := JSON_OBJECT_T.parse(l_clob);

  r_activities.activity_id       := j_obj.get_number('id');
  r_activities.athlete_id        := j_obj.get_object('athlete').get_number('id');
  r_activities.start_date_utc    := iso8601_utc(j_obj.get_string('start_date'));
  r_activities.start_date_local  := iso8601_tz(j_obj.get_string('start_date_local'), j_obj.get_string('timezone'));
…
  r_activities.distance_km       := j_obj.get_number('distance')/1000;

  r_activities.gear_id           := j_obj.get_string('gear_id');
  IF r_activities.type IN('Ride','Walk','Hike','VirtualRide','Run') THEN
    j_subobj                     := j_obj.get_object('gear');
    IF j_subobj IS NOT NULL THEN
      r_activities.gear_name     := j_subobj.get_string('name');
    END IF;
  END IF;
…
  r_activities.photo_count       := j_obj.get_object('photos').get_number('count');
…
Then the entire row can be inserted or updated at the end from the row-type variable.
  BEGIN  
    INSERT INTO activities VALUES r_activities;
    dbms_output.put_line(sql%rowcount||' activity inserted');
    COMMIT;

  EXCEPTION 
    WHEN DUP_VAL_ON_INDEX THEN
      UPDATE activities
      SET    ROW = r_activities
      WHERE  activity_id = r_activities.activity_id;
      dbms_output.put_line(sql%rowcount||' activity updated');   
      COMMIT;
  END;

Extract Values from JSON in Virtual Columns

The other option is to store the JSON in a CLOB column in the database and convert it on demand via virtual columns.  I have subscribed to receive a message from Strava whenever I log a new activity, or update or delete an existing one.  That message is received by the database using a REST service.  
The message from Strava just tells me that an activity has been created, updated or deleted.  Then I have to process it.  Sometimes, I get multiple messages for the same activity in quick succession.
{
    "aspect_type": "update",
    "event_time": 1516126040,
    "object_id": 1360128428,
    "object_type": "activity",
    "owner_id": 134815,
    "subscription_id": 120475,
    "updates": {
        "title": "Messy"
    }
}
Strava requires that the REST service respond within 2 seconds, so any processing in it must be kept light.  I want to avoid:
  • spending time converting the JSON data while the REST service is running,
  • any malformed or unexpected variation in JSON causing an error in the REST service,
  • concurrent processing of different requests relating to the same activity causing one REST service handler to block another.  
Therefore, my REST service just stores the JSON in a CLOB column on a table and then triggers a scheduler job to process the message.  The subsequent processing needs to access the name-values in the JSON, so I have created virtual columns on the queue table that will only be evaluated on demand.
CREATE TABLE webhook_events
(ID                NUMBER GENERATED ALWAYS AS IDENTITY
,PAYLOAD           CLOB
,processing_status NUMBER DEFAULT 0 NOT NULL
…
,CONSTRAINT webhook_events_pk PRIMARY KEY (id)
);

ALTER TABLE webhook_events ADD aspect_type      GENERATED ALWAYS AS (JSON_VALUE(payload, '$."aspect_type"')) VIRTUAL;
ALTER TABLE webhook_events ADD object_type      GENERATED ALWAYS AS (JSON_VALUE(payload, '$."object_type"')) VIRTUAL;
ALTER TABLE webhook_events ADD object_id NUMBER GENERATED ALWAYS AS (JSON_VALUE(payload, '$."object_id"'  )) VIRTUAL;
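The REST handler itself can then be very light.  A sketch of the idea (the procedure name is illustrative; PROCESS_WEBHOOK_QUEUE_JOB is the job shown later in this post):

```sql
-- Illustrative sketch of the REST handler body: store the payload,
-- commit, and hand the real work to an existing scheduler job.
CREATE OR REPLACE PROCEDURE receive_webhook 
(p_payload IN CLOB
) IS
BEGIN
  INSERT INTO webhook_events (payload) VALUES (p_payload);
  COMMIT;
  -- Run the queue-processing job asynchronously, so the HTTP response
  -- returns well within Strava's 2-second limit.
  dbms_scheduler.run_job(job_name            => 'STRAVA.PROCESS_WEBHOOK_QUEUE_JOB'
                        ,use_current_session => FALSE);
END receive_webhook;
/
```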
In Strava, times are held in Unix 'Epoch Time' (the number of non-leap seconds since midnight UTC on 1st Jan 1970).  I have created a deterministic PL/SQL function to convert it to an Oracle timestamp and have referenced it in my virtual column definition.  
One virtual column cannot reference another.  So, I could not reference the virtual column EVENT_TIME in another virtual column EVENT_TIMESTAMP.  Instead, I had to reference the event_time name-value pair in both column definitions.
CREATE OR REPLACE FUNCTION strava.epoch_to_tstz 
(p_epoch_seconds IN NUMBER
) RETURN TIMESTAMP WITH TIME ZONE DETERMINISTIC IS
BEGIN
  RETURN TO_TIMESTAMP_TZ('1970-01-01 00:00:00 UTC', 'YYYY-MM-DD HH24:MI:SS TZR') 
       + NUMTODSINTERVAL(p_epoch_seconds, 'SECOND');
END epoch_to_tstz;
/

ALTER TABLE webhook_events ADD event_time NUMBER 
   GENERATED ALWAYS AS (JSON_VALUE(payload, '$."event_time"' )) VIRTUAL;
ALTER TABLE webhook_events ADD event_timestamp TIMESTAMP WITH TIME ZONE 
   GENERATED ALWAYS AS (epoch_to_tstz(JSON_VALUE(payload, '$."event_time"'))) VIRTUAL;
…
If the message is an update, it contains a JSON object listing the updated items and their new values.  The column updates contains this JSON document.
ALTER TABLE webhook_events ADD updates GENERATED ALWAYS AS (JSON_QUERY(payload, '$."updates"' )) VIRTUAL;
I can now reference the virtual columns in SQL in the queue handler without converting and storing the values in regular columns.
…
  FOR i IN ( --iterate over requests
    SELECT h.*, a.activity_id
    FROM webhook_events h
      LEFT OUTER JOIN activities a ON a.activity_id = h.object_id 
    WHERE h.processing_status = 0
    AND   h.object_type = 'activity'
    ORDER BY h.id
    FOR UPDATE OF h.processing_status 
  ) LOOP
…

Wednesday, March 25, 2026

Oracle 23ai/26ai: The New RETURNING Clause for the MERGE Statement

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Database. 

The SQL MERGE statement was introduced in Oracle version 9i, allowing what is sometimes called UPSERT logic: a single SQL statement that conditionally inserts or updates rows.  However, one limitation remained.  Unlike INSERT, UPDATE, and DELETE, the MERGE statement did not support the RETURNING clause.  Oracle 23ai/26ai removes this restriction. Developers can now use the RETURNING clause directly in MERGE statements to retrieve values of affected rows. 

The Problem Before Oracle 23

Before Oracle 23, I would have to code a query loop capturing the values that were going to be updated and then update them in separate statements within the loop with additional exception handling as required.
…
  l_rows_processed := FALSE;

  FOR s IN (
    SELECT a.activity_id
    ,      listagg(DISTINCT ma.name,', ') WITHIN GROUP (ORDER BY ma.area_level, ma.name) area_list
    FROM   activities a
      INNER JOIN activity_areas aa ON a.activity_id = aa.activity_id
      INNER JOIN my_areas ma ON ma.area_code = aa.area_code and ma.area_number = aa.area_number
    WHERE a.activity_id = p_activity_id
    AND a.processing_status = 4
    AND ma.matchable = 1
    GROUP BY a.activity_id
  ) LOOP
    l_rows_processed := TRUE;

    UPDATE activities u
    SET    u.area_list = s.area_list
    WHERE  u.activity_id = s.activity_id;

    update_activity_description(s.area_list,l_description);
  END LOOP;

  IF NOT l_rows_processed THEN 
    RAISE e_activity_not_found;
  END IF;
…

New Syntax in Oracle 23/26

Alternatively, I can use the MERGE statement to write a single SQL statement that generates the new value for a column and updates it in one go.  Now, the RETURNING clause also captures that new value in a variable that can be passed to another procedure.
MERGE INTO activities u
  USING (
    SELECT a.activity_id
    ,      listagg(DISTINCT ma.name,', ') WITHIN GROUP (ORDER BY ma.area_level, ma.name) area_list
    FROM   activities a
      INNER JOIN activity_areas aa on a.activity_id = aa.activity_id
      INNER JOIN my_areas ma on ma.area_code = aa.area_code and ma.area_number = aa.area_number
    WHERE a.activity_id = p_activity_id
    AND a.processing_status = 4
    AND ma.matchable = 1
    GROUP BY a.activity_id
  ) S 
  ON (s.activity_id = u.activity_id)
  WHEN MATCHED THEN UPDATE 
  SET u.area_list = s.area_list
  RETURNING NEW area_list INTO l_new_area_list; --new in Oracle 23
  
  IF SQL%ROWCOUNT = 0 THEN 
    RAISE e_activity_not_found;
  ELSE
    update_activity_description(l_new_area_list,l_description);
  END IF;

The benefits are:
  • Less and simpler code, which ought therefore to be easier to test and maintain, requiring less additional logic and exception handling.
  • Fewer SQL statements and therefore fewer context switches between PL/SQL and SQL.
Just like the RETURNING clause on UPDATE and DELETE, it is also possible to:
  • Reference new and/or old column values,
  • Return a single value into a scalar variable,
  • Bulk collect multiple rows into a collection variable,
  • Aggregate multiple rows into a scalar variable.
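For example, a sketch of bulk collecting old and new values (the table T and its numeric column VAL are hypothetical):

```sql
DECLARE
  -- Associative arrays to receive one value per affected row
  l_old_vals DBMS_SQL.number_table;
  l_new_vals DBMS_SQL.number_table;
BEGIN
  -- Double VAL on every row of T, capturing both the old and new
  -- values of each affected row (OLD/NEW in RETURNING is new in 23ai).
  UPDATE t
  SET    val = val * 2
  RETURNING OLD val, NEW val BULK COLLECT INTO l_old_vals, l_new_vals;

  dbms_output.put_line(sql%rowcount||' rows updated');
END;
/
```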
I am far from the first to blog about this feature, but it deserves to be better known.

Tuesday, March 24, 2026

ChatGPT & Oracle Development

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Database.

TL;DR

This is an opinion piece about the impact of AI on developers and administrators.  I'll tell you my opinion here at the start:  

AI won't be replacing us, at least not yet, but we may be replaced by someone who is more productive because they are using AI!  I certainly advocate using it, but do so thoughtfully.  Consider whether the answers are sensible, and then test them carefully.

Introduction

To learn more about Oracle 26ai Autonomous Database, I returned to a project I created in 2021 to explore spatial data.  I had exported my activity data from Strava as flat files and then imported them into an Oracle database.  

Now, I have migrated that project to an Autonomous database on OCI and integrated it directly with Strava through their API.  Notifications of new activities are received via a REST service, some processing is done in the Oracle database, and results are written back to the Strava activity description.  All quite simple, but it made me use techniques and technologies that I have never used before.

ChatGPT

When I created the original project in 2021, I had the Oracle documentation and Google.  I had to design and write every bit of code myself.  It all took time.

Now, I have been able to use ChatGPT (other AI Chatbots are available, but this is where I started), and the effect has been remarkable.  I have been pointed at features and techniques that are new to me, and often I have been given a concrete example to start work on, and therefore I have learned about them.

I asked ChatGPT questions in plain language about the details of both Oracle 26 and the Strava API, and it gave me answers in plain language that were generally sensible.  In some cases, it designed complete processing flows; sometimes it just illustrated the answer with code examples.  I could ask follow-up questions, and it would answer them in the context of the earlier question, refining its response.  It became a genuine conversation, though it is not going to pass the Turing Test!

ChatGPT's answers were mostly accurate, though its generated code was not always completely correct.  On some subjects, such as character sets, we went round in circles.  Sometimes, I would point out mistakes, and it would say 'Yes, you are right!' or 'Well spotted!'.  I am not convinced it learnt anything from that.  Over time, I learnt that I needed to ask quite precise questions, otherwise it would go off in other directions.  Nevertheless, I found I got very quickly from a first draft of code to debugging almost-working code.  I have no doubt that using ChatGPT increased my productivity.  If I had to quantify the effect, I would estimate that it improved my productivity by a factor of about 3.

These are some of my early questions to ChatGPT:

  • "How can Strava notify my Oracle database, using only PL/SQL, that an activity has been added, deleted or updated?" 
    • The result included a complete design for creating a Strava webhook to send an HTTP message to a REST service, including a database data model design, and how to process it by calling the Strava API to extract the activity data.

  • "How would I load GeoJSON … into an Oracle spatial data object geometry in an Oracle autonomous database using just PL/SQL"
    • I got a complete PL/SQL procedure to extract the GeoJSON from the data.gov.ie website and then read it into an Oracle spatial geometry.
      • I was able to ask follow-up questions.   When one particular public data set produced errors from sdo_util.from_geojson, after a few other suggestions, ChatGPT provided a complete alternative PL/SQL procedure to create a spatial geometry from just the array of coordinates.  It is slower, but it works reliably.  I use it as an alternative when I get an error from the Oracle function.

There were some notable examples of code that ChatGPT produced correctly the first time, and much faster than I could have.  In particular, extracting all the data in a Strava activity (see strava_http.get_activity_stream) as both an Oracle spatial geometry and a GPX file, including heart monitor, cadence and power meter data if also present (that must conform to the Topographix and Garmin XML schemas).  My code is on GitHub, so you can judge the result for yourself!

Nullius in Verba

This motto (it can be translated as "Take Nobody's Word for It!") is at the heart of the scientific principle.  It can usefully be applied to many things, and certainly to ChatGPT.  

ChatGPT is a hugely powerful tool that seems to be capable of answering any reasonable query.  I would encourage anyone to use it to help develop code faster.  However, every response should be treated with healthy scepticism and be tested carefully.   Whether code compiles and executes is a straightforward question with an essentially binary answer.  Whether that code then does what it is supposed to do requires thorough testing, but then so does human-written code!

Nonetheless, I am hugely impressed by ChatGPT.  I have no doubt that I got further and got there much faster than I ever would otherwise! 

Monday, March 23, 2026

Job Classes on Autonomous Database

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Database.

I have written about using Job Classes with the database scheduler.  It is essentially the same on an Autonomous Database, but some configuration is delivered by Oracle.  You may choose to use it directly as delivered.  However, I suggest using it as the basis for a custom configuration.

The Autonomous Transaction Processing (ATP) database is delivered with 5 consumer groups and 5 corresponding job classes.  
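The listing below can be reproduced with a query along these lines (SQL*Plus formatting commands omitted):

```sql
-- List the delivered job classes and the consumer groups they map to.
SELECT owner, job_class_name, resource_consumer_group, service
,      logging_level, log_history, comments
FROM   all_scheduler_job_classes
WHERE  resource_consumer_group IN ('TPURGENT','TP','HIGH','MEDIUM','LOW')
ORDER BY job_class_name;
```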
OWNER JOB_CLASS_NAME RESOURCE_CONSUMER_GROUP SERVICE
----- -------------- ----------------------- ------------------------------------------------------
LOGGING_LEVEL LOG_HISTORY COMMENTS                                
------------- ----------- ----------------------------------------
SYS    TPURGENT      TPURGENT                GE***********09_GOFASTER1_tpurgent.adb.oraclecloud.com 
RUNS                      Urgent transaction processing jobs     

SYS    TP            TP                      GE***********09_GOFASTER1_tp.adb.oraclecloud.com     
RUNS                      Transaction processing jobs            

SYS    HIGH          HIGH                    GE***********09_GOFASTER1_high.adb.oraclecloud.com   
RUNS                      High priority jobs                     

SYS    MEDIUM        MEDIUM                  GE***********09_GOFASTER1_medium.adb.oraclecloud.com 
RUNS                      Medium priority jobs                   

SYS    LOW           LOW                     GE***********09_GOFASTER1_low.adb.oraclecloud.com    
RUNS                      Low priority jobs                      
It is easy and perfectly reasonable to allocate these delivered job classes to scheduler jobs.  However, these job classes cannot be changed, even by the ADMIN user.
BEGIN dbms_Scheduler.set_attribute('SYS.TPURGENT', 'comments', 'A Comment'); END;
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_ISCHED", line 3513
ORA-06512: at "SYS.DBMS_SCHEDULER", line 3460
ORA-06512: at line 1

Note that the service names are different and unique to every autonomous database.  I have been careful to avoid hard-coding them anywhere within my scripts and code.  Instead, I duplicate the delivered job classes and then alter them as necessary.  Thus, each job or group of jobs has its own job class.  I prefer to manage the jobs and their job classes, as far as possible, within a packaged procedure.  This has several advantages.

  • The right version is always available because it has been installed into the database and can be migrated like any other version-controlled source code.  This also covers when a database has been cloned, restored or flashed back.  This saves looking for the right version of the right script.  
  • Jobs can be created and managed by a user who does not have access to manage the job scheduler, but who can execute procedures in the package.
  • I have created a procedure to clone a job class and adjust attributes as required.
  • The code is available on GitHub.
CREATE OR REPLACE PACKAGE BODY strava.strava_job AS
...
e_job_already_exists EXCEPTION;
PRAGMA EXCEPTION_INIT(e_job_already_exists,-27477);
...
PROCEDURE create_job_class
(p_job_class_name          all_scheduler_job_classes.job_class_name%TYPE
,p_based_on_job_class      all_scheduler_job_classes.job_class_name%TYPE
,p_resource_consumer_group all_scheduler_job_classes.resource_consumer_group%TYPE DEFAULT NULL
,p_service                 all_scheduler_job_classes.service%TYPE                 DEFAULT NULL
,p_logging_level           all_scheduler_job_classes.logging_level%TYPE           DEFAULT NULL
,p_log_history             all_scheduler_job_classes.log_history%TYPE             DEFAULT NULL
,p_comments                all_scheduler_job_classes.comments%TYPE                DEFAULT NULL)
IS
  r_job_class all_scheduler_job_classes%ROWTYPE;
...
BEGIN
...  
  SELECT * INTO r_job_class FROM all_scheduler_job_classes
  WHERE owner = 'SYS' AND job_class_name = p_based_on_job_class;
  
  BEGIN
    DBMS_SCHEDULER.CREATE_JOB_CLASS(p_job_class_name); 
  EXCEPTION WHEN e_job_already_exists THEN NULL;
  END;
  
  IF p_resource_consumer_group IS NOT NULL THEN r_job_class.resource_consumer_group := p_resource_consumer_group; END IF;
  IF p_service                 IS NOT NULL THEN r_job_class.service := p_service; END IF;
  IF p_logging_level           IS NOT NULL THEN r_job_class.logging_level := p_logging_level; END IF;
  IF p_log_history             IS NOT NULL THEN r_job_class.log_history := p_log_history; END IF;
  IF p_comments                IS NOT NULL THEN r_job_class.comments := p_comments; END IF;
  
  dbms_Scheduler.set_attribute(p_job_class_name, 'resource_consumer_group', r_job_class.resource_consumer_group);
  dbms_Scheduler.set_attribute(p_job_class_name, 'service'                , r_job_class.service);
  IF    r_job_class.logging_level = 'OFF'         THEN dbms_Scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_OFF);
  ELSIF r_job_class.logging_level = 'RUNS'        THEN dbms_Scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_RUNS);
  ELSIF r_job_class.logging_level = 'FAILED RUNS' THEN dbms_Scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_FAILED_RUNS);
  ELSIF r_job_class.logging_level = 'FULL'        THEN dbms_Scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_FULL);
  END IF;
  dbms_Scheduler.set_attribute(p_job_class_name, 'log_history'            , r_job_class.log_history);
  dbms_Scheduler.set_attribute(p_job_class_name, 'comments'               , r_job_class.comments);
...
EXCEPTION 
  WHEN no_data_found THEN
...
    RAISE;
END create_job_class;
This new procedure is called from the procedures that create jobs.  In the example below, the LOW job class is cloned into a new PURGE_API_LOG_CLASS that is used by the PURGE_API_LOG job.  I have set the log history retention to 7 days, but all other settings remain the same.  
PROCEDURE create_purge_api_log_job
IS
  k_job_name  CONSTANT VARCHAR2(128 CHAR) := 'STRAVA.PURGE_API_LOG';
  k_job_class CONSTANT VARCHAR2(128 CHAR) :=    'SYS.PURGE_API_LOG_CLASS';
BEGIN
...
  create_job_class(k_job_class,'LOW', p_log_history=>7);
  BEGIN
    dbms_scheduler.create_job
    (job_name   => k_job_name
    ,job_type   => 'STORED_PROCEDURE'
    ,job_action => 'STRAVA.STRAVA_HTTP.PURGE_API_LOG'
    ,enabled    => FALSE
    );
  EXCEPTION WHEN e_job_already_exists THEN NULL;
  END;
...
  dbms_scheduler.set_attribute(name => k_job_name, attribute => 'JOB_CLASS', value => k_job_class);
...
  dbms_scheduler.enable(name => k_job_name);
...
END create_purge_api_log_job;
...
END strava_job;
/

Now I have several job classes 

OWNER JOB_CLASS_NAME                        RESOURCE_CON SERVICE
----- ------------------------------------- ------------ ------------------------------------------------------
LOGGING_LEVEL LOG_HISTORY COMMENTS                      
------------- ----------- ------------------------------
SYS  CREATE_ACTIVITY_HSEARCH_UPD_ALL_CLASS  LOW          GE***********09_GOFASTER1_low.adb.oraclecloud.com    
RUNS                   7 Low priority jobs            

SYS  ACTIVITY_AREA_LIST_UPD_ALL_CLASS        LOW         GE***********09_GOFASTER1_low.adb.oraclecloud.com    
RUNS                   7 Low priority jobs            

SYS  PURGE_API_LOG_CLASS                     LOW         GE***********09_GOFASTER1_low.adb.oraclecloud.com    
RUNS                   7 Low priority jobs            

SYS  PURGE_EVENT_QUEUE_CLASS                 LOW         GE***********09_GOFASTER1_low.adb.oraclecloud.com    
RUNS                   7 Low priority jobs            

SYS  BATCH_LOAD_ACTIVITIES_CLASS             MEDIUM      GE***********09_GOFASTER1_medium.adb.oraclecloud.com 
RUNS                   7 Medium priority jobs         

SYS  UPDATE_STRAVA_ACTIVTY_CLASS             MEDIUM      GE***********09_GOFASTER1_medium.adb.oraclecloud.com 
RUNS                   7 Medium priority jobs         

SYS  PROCESS_WEBHOOK_QUEUE_CLASS             MEDIUM      GE***********09_GOFASTER1_medium.adb.oraclecloud.com 
RUNS                   7 Medium priority jobs         

SYS  RENEW_STRAVA_TOKENS_CLASS               HIGH        GE***********09_GOFASTER1_high.adb.oraclecloud.com   
RUNS                   7 High priority jobs
Each job has been allocated to a different job class.  In future, I can control the behaviour of each job by adjusting the job class.
OWNER  JOB_NAME                             JOB_CLASS                            
------ ------------------------------------ -------------------------------------
STRAVA ACTIVITY_AREA_LIST_UPD_ALL_JOB      ACTIVITY_AREA_LIST_UPD_ALL_CLASS 
STRAVA BATCH_LOAD_ACTIVITIES_JOB           BATCH_LOAD_ACTIVITIES_CLASS 
STRAVA CREATE_ACTIVITY_HSEARCH_UPD_ALL_JOB CREATE_ACTIVITY_HSEARCH_UPD_ALL_CLASS
STRAVA PROCESS_WEBHOOK_QUEUE_JOB           PROCESS_WEBHOOK_QUEUE_CLASS 
STRAVA PURGE_API_LOG                       PURGE_API_LOG_CLASS 
STRAVA PURGE_EVENT_QUEUE                   PURGE_EVENT_QUEUE_CLASS 
STRAVA RENEW_STRAVA_TOKENS_JOB             RENEW_STRAVA_TOKENS_CLASS 
STRAVA UPDATE_STRAVA_ACTIVTY_JOB           UPDATE_STRAVA_ACTIVTY_CLASS

Thursday, February 12, 2026

Demonstration of Parallel Execution tasks (DBMS_PARALLEL_EXECUTE) with Job Class

This post is the last part in a series that discusses the use of job classes with the Oracle database scheduler.
  1. Job Classes and the Database Scheduler
  2. Demonstration of Scheduler Jobs (DBMS_SCHEDULER) with Job Class
  3. Demonstration of Parallel Execution tasks (DBMS_PARALLEL_EXECUTE) with Job Class
A job class can be assigned to all the tasks created when using DBMS_PARALLEL_EXECUTE to run a piece of SQL in concurrent scheduler jobs.  The privileges are the same as required for DBMS_SCHEDULER jobs.

Demonstration Slow Function

This time, I need a slow function that I can call in my SQL.
CREATE OR REPLACE FUNCTION GFC_BATCH_USER.my_slow_fn(p_n IN INTEGER) RETURN INTEGER IS 
  l_num_rows INTEGER;
BEGIN
  with t as (select /*+MATERIALIZE*/ level n FROM dual CONNECT BY LEVEL <= p_n)
  select count(*) INTO l_num_rows from t
  connect by n > prior n
  start with n = 1;
  dbms_output.put_line('my_slow_fn('||p_n||'):'||l_num_rows||' rows returned');
  RETURN l_num_rows;
END;
/
select GFC_BATCH_USER.my_slow_fn(20) from dual
/

  GFC_BATCH_USER.MY_SLOW_FN(20)
-------------------------------
                         524288

my_slow_fn(20):524288 rows returned

Setup Table for Parallel SQL

I am going to create a table with 30 rows.  Each row will become a chunk that is processed by a job, executed in parallel up to the maximum number of concurrent jobs.  
alter session set current_schema=GFC_BATCH_USER;
create table t (a number, b number, c date, d number, constraint t_pk primary key(a));
truncate table t; 
insert into t (a) select level from dual connect by level <= 30;
commit;
select * from t;

         A          B C                                     D
---------- ---------- ---------------------------- ----------
         1
         2
         3
...
        29                                                   
        30

Executing the Task

During this test, JOB_QUEUE_PROCESSES was set to 6 on a RAC system with 2 nodes.  I will get 6 jobs on each of the 2 instances, 12 in all, and the 30 chunks will be distributed across them.

Note that the job class is specified when the task is run with DBMS_PARALLEL_EXECUTE.RUN_TASK.

First, clean up any previous run of the task:
exec DBMS_PARALLEL_EXECUTE.STOP_TASK('myslowtask');
exec DBMS_PARALLEL_EXECUTE.DROP_TASK('myslowtask');

DECLARE
  l_sql_stmt2 VARCHAR2(1000) := 'UPDATE GFC_BATCH_USER.t SET b = GFC_BATCH_USER.my_slow_fn(24 + mod(a,5)), c=SYSDATE, d=SYS_CONTEXT(''USERENV'',''SID'') WHERE a BETWEEN :start_id AND :end_id';
  l_task_name VARCHAR2(30) := 'myslowtask';
BEGIN
  -- Create the TASK
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task_name);

  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(task_name=>l_task_name, table_owner=>'GFC_BATCH_USER', table_name=>'T', table_column=>'A', chunk_size=>1);

  -- Execute the DML in parallel
  DBMS_PARALLEL_EXECUTE.RUN_TASK(l_task_name, l_sql_stmt2, DBMS_SQL.NATIVE, parallel_level=>30, job_class=>'MY_JOB_CLASS');
END;
/
The session that submits the jobs waits for all of them to complete.
PL/SQL procedure successfully completed.

Elapsed: 00:01:25.706
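
If any chunks fail, the task finishes with an error status rather than FINISHED.  A sketch of checking the outcome and re-running just the failed chunks (using the status constants and RESUME_TASK procedure in DBMS_PARALLEL_EXECUTE):

```sql
-- Check the task status after RUN_TASK returns; resume if chunks failed.
DECLARE
  l_status NUMBER;
BEGIN
  l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('myslowtask');
  IF l_status IN (DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR,
                  DBMS_PARALLEL_EXECUTE.CRASHED) THEN
    -- Re-processes only the chunks that did not complete successfully
    DBMS_PARALLEL_EXECUTE.RESUME_TASK('myslowtask');
  END IF;
END;
/
```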

Verifying the Task

After the parallel execution has completed, we can see that each row in the table has been updated.
  • B is the number of rows returned by the function.
  • C is the timestamp of the update, which occurs when the slow function completes.
  • D is the job's session ID.  
select * from GFC_BATCH_USER.t
/

         A          B C                          D
---------- ---------- ----------------- ----------
         1   16777216 27/11/25 22:04:40        656
         2   33554432 27/11/25 22:04:40        340
         3   67108864 27/11/25 22:04:40        184
         4  134217728 27/11/25 22:04:40        811
         5    8388608 27/11/25 22:04:41        798
         6   16777216 27/11/25 22:04:41        502
         7   33554432 27/11/25 22:04:45        798
         8   67108864 27/11/25 22:04:46        799
         9  134217728 27/11/25 22:04:46        351
        10    8388608 27/11/25 22:04:46        189
        11   16777216 27/11/25 22:04:46         39
        12   33554432 27/11/25 22:04:46        196
        13   67108864 27/11/25 22:04:46        325
        14  134217728 27/11/25 22:04:49        656
        15    8388608 27/11/25 22:04:50        502
        16   16777216 27/11/25 22:04:51        189
        17   33554432 27/11/25 22:04:55        502
        18   67108864 27/11/25 22:04:55         39
        19  134217728 27/11/25 22:04:59        340
        20    8388608 27/11/25 22:05:00        189
        21   16777216 27/11/25 22:05:03        798
        22   33554432 27/11/25 22:05:05        196
        23   67108864 27/11/25 22:05:05        189
        24  134217728 27/11/25 22:05:13        798
        25    8388608 27/11/25 22:05:14        502
        26   16777216 27/11/25 22:05:18        502
        27   33554432 27/11/25 22:05:19        184
        28   67108864 27/11/25 22:05:24        196
        29  134217728 27/11/25 22:05:25        325
        30    8388608 27/11/25 22:05:25        799

DBA_PARALLEL_EXECUTE_TASKS

The task has completed but has not been dropped, so I can still see the job prefix, the job class and the SQL statement.
SELECT task_owner, task_name, chunk_type, status, table_owner, table_name, number_column
, job_prefix, job_class, sql_stmt FROM dba_parallel_execute_tasks;

TASK_OWNER           TASK_NAME  CHUNK_TYPE   STATUS     TABLE_OWNER          TABLE_NAME Column     JOB_PREFIX   JOB_CLASS   
-------------------- ---------- ------------ ---------- -------------------- ---------- ---------- ------------ ------------
SQL_STMT                                                                                                                
------------------------------------------------------------------------------------------------------------------------
GFC_BATCH_USER       myslowtask NUMBER_RANGE FINISHED   GFC_BATCH_USER       T          A          TASK$_184314 MY_JOB_CLASS
UPDATE GFC_BATCH_USER.t SET b = GFC_BATCH_USER.my_slow_fn(24+mod(a,5)), c=SYSDATE, d=SYS_CONTEXT('USERENV','SID') WH
ERE a BETWEEN :start_id AND :end_id

USER_PARALLEL_EXECUTE_CHUNKS

SELECT chunk_id, task_name, status, start_id, end_id, job_name
FROM user_parallel_execute_chunks;

  CHUNK_ID TASK_NAME  STATUS       START_ID     END_ID JOB_NAME            
---------- ---------- ---------- ---------- ---------- --------------------
    620725 myslowtask PROCESSED           1          1 TASK$_184314_1      
    620726 myslowtask PROCESSED           2          2 TASK$_184314_3      
    620727 myslowtask PROCESSED           3          3 TASK$_184314_7      
    620728 myslowtask PROCESSED           4          4 TASK$_184314_5      
    620729 myslowtask PROCESSED           5          5 TASK$_184314_11     
    620730 myslowtask PROCESSED           6          6 TASK$_184314_9      
    620731 myslowtask PROCESSED           7          7 TASK$_184314_11     
    620732 myslowtask PROCESSED           8          8 TASK$_184314_10     
    620733 myslowtask PROCESSED           9          9 TASK$_184314_2      
    620734 myslowtask PROCESSED          10         10 TASK$_184314_12     
    620735 myslowtask PROCESSED          11         11 TASK$_184314_6      
    620736 myslowtask PROCESSED          12         12 TASK$_184314_4      
    620737 myslowtask PROCESSED          13         13 TASK$_184314_8      
    620738 myslowtask PROCESSED          14         14 TASK$_184314_1      
    620739 myslowtask PROCESSED          15         15 TASK$_184314_9      
    620740 myslowtask PROCESSED          16         16 TASK$_184314_12     
    620741 myslowtask PROCESSED          17         17 TASK$_184314_9
    620742 myslowtask PROCESSED          18         18 TASK$_184314_6
    620743 myslowtask PROCESSED          19         19 TASK$_184314_3
    620744 myslowtask PROCESSED          20         20 TASK$_184314_12
    620745 myslowtask PROCESSED          21         21 TASK$_184314_11
    620746 myslowtask PROCESSED          22         22 TASK$_184314_4
    620747 myslowtask PROCESSED          23         23 TASK$_184314_12
    620748 myslowtask PROCESSED          24         24 TASK$_184314_11
    620749 myslowtask PROCESSED          25         25 TASK$_184314_9
    620750 myslowtask PROCESSED          26         26 TASK$_184314_9
    620751 myslowtask PROCESSED          27         27 TASK$_184314_7
    620752 myslowtask PROCESSED          28         28 TASK$_184314_4
    620753 myslowtask PROCESSED          29         29 TASK$_184314_8
    620754 myslowtask PROCESSED          30         30 TASK$_184314_10
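
The same view can be used to summarise how the 30 chunks were distributed across the 12 jobs.  For example:

```sql
-- Count the chunks processed by each scheduler job
SELECT job_name, COUNT(*) chunks
FROM   user_parallel_execute_chunks
WHERE  task_name = 'myslowtask'
GROUP BY job_name
ORDER BY job_name;
```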
Once the jobs have finished, ALL_SCHEDULER_RUNNING_JOBS returns no rows.
select * from all_scheduler_running_jobs
/

ALL_SCHEDULER_JOB_LOG

The scheduler job log reports that the job ran and records the job class.
select log_id, log_date, owner, job_name, job_class, operation, status
from   all_scheduler_job_log
where  job_name IN (SELECT job_name from user_parallel_execute_chunks)
order by log_date desc
/

    LOG_ID LOG_DATE                        OWNER                JOB_NAME             JOB_CLASS    OPERATION  STATUS    
---------- ------------------------------- -------------------- -------------------- ------------ ---------- ----------
   1432798 27/11/25 22.06.03.722515000 GMT GFC_BATCH_USER       TASK$_184314_8       MY_JOB_CLASS RUN        SUCCEEDED 
   1432840 27/11/25 22.05.59.990596000 GMT GFC_BATCH_USER       TASK$_184314_11      MY_JOB_CLASS RUN        SUCCEEDED 
   1432838 27/11/25 22.05.56.539066000 GMT GFC_BATCH_USER       TASK$_184314_3       MY_JOB_CLASS RUN        SUCCEEDED 
   1432832 27/11/25 22.05.52.895848000 GMT GFC_BATCH_USER       TASK$_184314_1       MY_JOB_CLASS RUN        SUCCEEDED 
   1432790 27/11/25 22.05.49.598773000 GMT GFC_BATCH_USER       TASK$_184314_2       MY_JOB_CLASS RUN        SUCCEEDED 
   1432788 27/11/25 22.05.47.204949000 GMT GFC_BATCH_USER       TASK$_184314_4       MY_JOB_CLASS RUN        SUCCEEDED 
   1432836 27/11/25 22.05.42.118069000 GMT GFC_BATCH_USER       TASK$_184314_5       MY_JOB_CLASS RUN        SUCCEEDED 
   1432796 27/11/25 22.05.39.495460000 GMT GFC_BATCH_USER       TASK$_184314_12      MY_JOB_CLASS RUN        SUCCEEDED 
   1432834 27/11/25 22.05.37.321227000 GMT GFC_BATCH_USER       TASK$_184314_7       MY_JOB_CLASS RUN        SUCCEEDED 
   1432792 27/11/25 22.05.34.397795000 GMT GFC_BATCH_USER       TASK$_184314_6       MY_JOB_CLASS RUN        SUCCEEDED 
   1432794 27/11/25 22.05.30.552796000 GMT GFC_BATCH_USER       TASK$_184314_10      MY_JOB_CLASS RUN        SUCCEEDED 
   1432842 27/11/25 22.05.28.175639000 GMT GFC_BATCH_USER       TASK$_184314_9       MY_JOB_CLASS RUN        SUCCEEDED

ALL_SCHEDULER_JOB_RUN_DETAILS

select log_id, log_date, owner, job_name, status, req_start_date, actual_start_date, run_duration, instance_id, session_id, slave_pid, cpu_used, output
from   all_scheduler_job_run_details
where  job_name IN (SELECT job_name from user_parallel_execute_chunks)
order by job_name desc
/
The 12 jobs show the 30 calls to my_slow_fn() and how long they took.  Jobs such as TASK$_184314_9 that happened to pick up shorter chunks had time to process 5 of them, while TASK$_184314_2 processed only a single longer-running chunk.
    LOG_ID LOG_DATE                        OWNER                JOB_NAME             STATUS     REQ_START_DATE                           
---------- ------------------------------- -------------------- -------------------- ---------- -----------------------------------------
                                                              Inst Session                                                      
ACTUAL_START_DATE                         RUN_DURATION          ID ID         SLAVE_PID                      CPU_USED           
----------------------------------------- ------------------- ---- ---------- ------------------------------ -------------------
OUTPUT                                                                                                                                                
--------------------------------------------------------------------------------------------------------------------------------------------
   1432842 27/11/25 22.05.28.184200000 GMT GFC_BATCH_USER       TASK$_184314_9       SUCCEEDED  27/11/25 22.04.40.949320000 EUROPE/LONDON 
27/11/25 22.04.41.103830000 EUROPE/LONDON +00 00:00:47.000000    1 502,40742  67745                          +00 00:00:43.390000 
my_slow_fn(25):16777216 rows returned                                                                                                                    
my_slow_fn(24):8388608 rows returned                                                                                                                     
my_slow_fn(26):33554432 rows returned                                                                                                                    
my_slow_fn(24):8388608 rows returned                                                                                                                     
my_slow_fn(25):16777216 rows returned                                                                                                                    

   1432798 27/11/25 22.06.03.723089000 GMT GFC_BATCH_USER       TASK$_184314_8       SUCCEEDED  27/11/25 22.04.40.926354000 EUROPE/LONDON 
27/11/25 22.04.46.599530000 EUROPE/LONDON +00 00:01:17.000000    2 325,64741  72626                          +00 00:01:12.800000 
my_slow_fn(27):67108864 rows returned                                                                                                                    
my_slow_fn(28):134217728 rows returned                                                                                                                   

   1432834 27/11/25 22.05.37.321783000 GMT GFC_BATCH_USER       TASK$_184314_7       SUCCEEDED  27/11/25 22.04.40.897917000 EUROPE/LONDON 
27/11/25 22.04.40.902120000 EUROPE/LONDON +00 00:00:56.000000    1 184,51217  67747                          +00 00:00:52.880000 
my_slow_fn(27):67108864 rows returned                                                                                                                    
my_slow_fn(26):33554432 rows returned                                                                                                                    

   1432792 27/11/25 22.05.34.398412000 GMT GFC_BATCH_USER       TASK$_184314_6       SUCCEEDED  27/11/25 22.04.40.879716000 EUROPE/LONDON 
27/11/25 22.04.46.515314000 EUROPE/LONDON +00 00:00:48.000000    2 39,12988   71851                          +00 00:00:43.950000 
my_slow_fn(25):16777216 rows returned                                                                                                                    
my_slow_fn(27):67108864 rows returned                                                                                                                    

   1432836 27/11/25 22.05.42.118559000 GMT GFC_BATCH_USER       TASK$_184314_5       SUCCEEDED  27/11/25 22.04.40.862536000 EUROPE/LONDON 
27/11/25 22.04.40.902486000 EUROPE/LONDON +00 00:01:01.000000    1 811,31076  67743                          +00 00:00:57.780000 
my_slow_fn(28):134217728 rows returned                                                                                                                   

   1432788 27/11/25 22.05.47.205444000 GMT GFC_BATCH_USER       TASK$_184314_4       SUCCEEDED  27/11/25 22.04.40.846000000 EUROPE/LONDON 
27/11/25 22.04.46.515138000 EUROPE/LONDON +00 00:01:01.000000    2 196,61889  71847                          +00 00:00:56.680000 
my_slow_fn(26):33554432 rows returned                                                                                                                    
my_slow_fn(26):33554432 rows returned                                                                                                                    
my_slow_fn(27):67108864 rows returned                                                                                                                    

   1432838 27/11/25 22.05.56.539267000 GMT GFC_BATCH_USER       TASK$_184314_3       SUCCEEDED  27/11/25 22.04.40.827636000 EUROPE/LONDON 
27/11/25 22.04.40.902634000 EUROPE/LONDON +00 00:01:16.000000    1 340,61644  67741                          +00 00:01:11.690000 
my_slow_fn(26):33554432 rows returned                                                                                                                    
my_slow_fn(28):134217728 rows returned                                                                                                                   

   1432790 27/11/25 22.05.49.599028000 GMT GFC_BATCH_USER       TASK$_184314_2       SUCCEEDED  27/11/25 22.04.40.808503000 EUROPE/LONDON 
27/11/25 22.04.46.515308000 EUROPE/LONDON +00 00:01:03.000000    2 351,50039  71845                          +00 00:00:59.390000 
my_slow_fn(28):134217728 rows returned                                                                                                                   

   1432796 27/11/25 22.05.39.496026000 GMT GFC_BATCH_USER       TASK$_184314_12      SUCCEEDED  27/11/25 22.04.41.028696000 EUROPE/LONDON 
27/11/25 22.04.46.555765000 EUROPE/LONDON +00 00:00:53.000000    2 189,41910  72624                          +00 00:00:48.770000 
my_slow_fn(24):8388608 rows returned                                                                                                                     
my_slow_fn(25):16777216 rows returned                                                                                                                    
my_slow_fn(24):8388608 rows returned                                                                                                                     
my_slow_fn(27):67108864 rows returned                                                                                                                    

   1432840 27/11/25 22.05.59.990799000 GMT GFC_BATCH_USER       TASK$_184314_11      SUCCEEDED  27/11/25 22.04.40.998815000 EUROPE/LONDON 
27/11/25 22.04.41.103787000 EUROPE/LONDON +00 00:01:19.000000    1 798,31456  67749                          +00 00:01:15.300000 
my_slow_fn(24):8388608 rows returned                                                                                                                     
my_slow_fn(26):33554432 rows returned                                                                                                                    
my_slow_fn(25):16777216 rows returned                                                                                                                    
my_slow_fn(28):134217728 rows returned                                                                                                                   

   1432794 27/11/25 22.05.30.553345000 GMT GFC_BATCH_USER       TASK$_184314_10      SUCCEEDED  27/11/25 22.04.40.972242000 EUROPE/LONDON 
27/11/25 22.04.46.515430000 EUROPE/LONDON +00 00:00:44.000000    2 799,25589  71849                          +00 00:00:39.410000 
my_slow_fn(27):67108864 rows returned                                                                                                                    
my_slow_fn(24):8388608 rows returned                                                                                                                     

   1432832 27/11/25 22.05.52.896353000 GMT GFC_BATCH_USER       TASK$_184314_1       SUCCEEDED  27/11/25 22.04.40.788698000 EUROPE/LONDON 
27/11/25 22.04.40.793091000 EUROPE/LONDON +00 00:01:12.000000    1 656,39533  67739                          +00 00:01:07.410000 
my_slow_fn(25):16777216 rows returned                                                                                                                    
my_slow_fn(28):134217728 rows returned                                                                                                                   

Verify Consumer Group in Active Session History

ASH can be used to verify that the consumer group really was set for each of these jobs.
with h as (
select user_id, session_id, session_serial#, top_level_sql_id, sql_id, consumer_group_id, module, action
,      min(sample_Time) min_Sample_time, max(Sample_Time) max_sample_time, sum(usecs_per_row)/1e6 ash_Secs
from   gv$active_Session_History h
where  module = 'DBMS_SCHEDULER'
and    action IN (SELECT job_name from user_parallel_execute_chunks)
group by user_id, session_id, session_serial#, top_level_sql_id, sql_id, consumer_group_id, module, action
)
select h.user_id, u.username, h.session_id, h.session_serial#, h.consumer_group_id, cg.consumer_group
,      h.module, h.action, h.ash_Secs, h.min_Sample_time, h.max_sample_time
from   h
  left outer join dba_users u on u.user_id = h.user_id 
  left outer join dba_rsrc_consumer_groups cg on cg.consumer_group_id = h.consumer_Group_id
order by max_sample_time desc
/
It confirms that the consumer group was switched to HIGH_BATCH_GROUP.
                               Session Session Consumer                                                                    ASH                                            
   USER_ID USERNAME                  ID Serial# Group ID CONSUMER_GROUP       MODULE               ACTION                  Secs MIN_SAMPLE_TIME       MAX_SAMPLE_TIME      
---------- -------------------- ------- ------- -------- -------------------- -------------------- -------------------- ------- --------------------- ---------------------
       815 GFC_BATCH_USER           325   64741   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_8         77.18 27/11/25 22.04.46.781 27/11/25 22.06.02.963
       815 GFC_BATCH_USER           798   31456   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_11        79.02 27/11/25 22.04.41.593 27/11/25 22.05.59.614
       815 GFC_BATCH_USER           340   61644   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_3         75.02 27/11/25 22.04.41.593 27/11/25 22.05.55.613
       815 GFC_BATCH_USER           656   39533   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_1         72.02 27/11/25 22.04.41.593 27/11/25 22.05.52.612
       815 GFC_BATCH_USER           351   50039   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_2         63.18 27/11/25 22.04.46.781 27/11/25 22.05.48.960
       815 GFC_BATCH_USER           196   61889   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_4         61.18 27/11/25 22.04.46.781 27/11/25 22.05.46.959
       815 GFC_BATCH_USER           811   31076   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_5         61.02 27/11/25 22.04.41.593 27/11/25 22.05.41.609
       815 GFC_BATCH_USER           189   41910   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_12        53.18 27/11/25 22.04.46.781 27/11/25 22.05.38.957
       815 GFC_BATCH_USER           184   51217   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_7         56.01 27/11/25 22.04.41.593 27/11/25 22.05.36.608
       815 GFC_BATCH_USER            39   12988   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_6         48.18 27/11/25 22.04.46.781 27/11/25 22.05.33.956
       815 GFC_BATCH_USER           799   25589   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_10        44.17 27/11/25 22.04.46.781 27/11/25 22.05.29.955
       815 GFC_BATCH_USER           502   40742   738019 HIGH_BATCH_GROUP     DBMS_SCHEDULER       TASK$_184314_9         47.01 27/11/25 22.04.41.593 27/11/25 22.05.27.605