Tuesday, March 31, 2026

Making HTTP Requests from an Oracle Database

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Transaction Processing Database.

I have created a training project that integrates Strava (an application that tracks athletes' activities – in my case, cycling) with an Oracle Autonomous database that then performs spatial data processing and sends the results back to Strava.  The Oracle database calls the Strava APIs in HTTP requests that either extract data from or send it back to Strava.  These calls are made with the UTL_HTTP package.

All web services must be secured.  When an Autonomous AI Database instance is on a public endpoint, only HTTPS is permitted, and the only allowed port is 443.
External calls are made with the UTL_HTTP package, which requires a certificate wallet.  An Autonomous AI Database instance comes preconfigured with an Oracle Wallet that contains more than 90 of the most commonly trusted root and intermediate SSL certificates.

Access Control Lists (ACLs)

Before Oracle can call any external system, permission must be granted by creating an ACL.  The Strava APIs are all below https://www.strava.com/api/v3/.  The following script creates a new ACL with connect and http privileges for the STRAVA user (the database user that runs my application) and assigns those privileges to www.strava.com.  On Autonomous Database, this script should be run by the ADMIN user.

BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => 'www.strava.com',
    upper_port => 443,
    lower_port => 443,
    ace  => xs$ace_type(
              privilege_list => xs$name_list('connect','http'),
              principal_name => 'STRAVA',
              principal_type => xs_acl.ptype_db));
END;
/
If there is no ACL, I will get an error when I try to make the HTTP call.
ORA-29273: HTTP request failed
ORA-24247: network access denied by access control list (ACL)
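Before making the first call, it is worth checking which hosts a principal is already allowed to reach.  The granted access control entries can be queried from the data dictionary (a sketch, run as ADMIN):

```sql
-- List the access control entries granted to the STRAVA principal
SELECT host, lower_port, upper_port, principal, privilege
FROM   dba_host_aces
WHERE  principal = 'STRAVA'
ORDER BY host;
```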

HTTP Redirects

Sometimes an HTTP request is redirected to another URL, which may be in another domain.  An ACL is needed to cover each redirection of the URL.  Otherwise, the HTTP request will fail.  
I have particularly encountered this when downloading a GeoJSON resource directly into PL/SQL, where the published URL redirects to the actual location.  Errors ORA-29273 and ORA-24247 are raised even though an ACL grants access to the published URL.  Either additional ACLs are required, and/or more widely defined ACLs are needed.
By default, UTL_HTTP follows up to 3 redirections, but the maximum number of redirects can be set to any value, or redirection can be disabled altogether by setting the maximum to 0.
  IF p_redirect >= 0 THEN --restrict http redirect - mainly for debug
    UTL_HTTP.set_follow_redirect(l_req, p_redirect);
  END IF;

The easiest way I have found to determine all the redirections is to start by disabling redirection, by setting the maximum number of redirects to 0, and then look at the body returned from the HTTP request for the redirected URL.  Add the new ACL for this address, increment the number of redirections in SET_FOLLOW_REDIRECT, and repeat the HTTP request.  Repeat this process until the full request returns the requested item.
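The first step of that process can be sketched as follows (a simplified fragment based on the UTL_HTTP calls used elsewhere in this post; the URL is a hypothetical placeholder, and I am assuming the redirect target is reported in the standard Location header):

```sql
DECLARE
  l_url      CONSTANT VARCHAR2(4000) := 'https://example.com/resource'; -- hypothetical published URL
  l_req      UTL_HTTP.req;
  l_resp     UTL_HTTP.resp;
  l_location VARCHAR2(4000);
BEGIN
  l_req := UTL_HTTP.begin_request(l_url, 'GET', 'HTTP/1.1');
  UTL_HTTP.set_follow_redirect(l_req, 0);           -- do not follow any redirect
  l_resp := UTL_HTTP.get_response(l_req);
  IF l_resp.status_code IN (301, 302, 303, 307, 308) THEN
    UTL_HTTP.get_header_by_name(l_resp, 'Location', l_location);
    DBMS_OUTPUT.put_line('Redirected to: '||l_location);  -- add an ACL for this host
  END IF;
  UTL_HTTP.end_response(l_resp);
END;
/
```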

Take, for example, downloading county boundaries from the Irish Government's Open Data Unit at https://data.gov.ie/dataset/counties-national-statutory-boundaries-20191.  The published URL redirects to a download location on arcgis.com, so I created a wildcard ACL covering that whole domain.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => '*.arcgis.com',
    ace  => xs$ace_type(
              privilege_list => xs$name_list('connect','http'),
              principal_name => 'STRAVA',
              principal_type => xs_acl.ptype_db));
END;
/

HTTP Requests

To extract information from Strava, I make a GET request to one of the Strava API endpoints.  The response is a JSON message in the body.  That is read in chunks of less than 32K into a CLOB variable.
Updates to Strava are made through PUT requests to the Strava API.  The information to be updated is form-encoded and written to the request body, with the Content-Type and Content-Length headers set accordingly.

REM strava_http.sql
…  
  l_req := UTL_HTTP.begin_request(p_url, p_req_type, 'HTTP/1.1');
  UTL_HTTP.set_header(l_req, 'Authorization', 'Bearer ' || g_access_token);
  utl_http.set_header(l_req, 'Accept-Charset', 'UTF-8');
  
  IF p_req_type = 'PUT' THEN
    UTL_HTTP.set_header(l_req, 'Content-Type', 'application/x-www-form-urlencoded');
    IF p_put_body IS NOT NULL THEN   -- Body
      l_header_body := escape_form_value(p_put_body);
      UTL_HTTP.set_header(l_req, 'Content-Length', LENGTH(l_header_body));
      UTL_HTTP.write_text(l_req, l_header_body);
    END IF;
  END IF;

  l_resp := UTL_HTTP.get_response(l_req);
…
  IF l_resp.status_code = 200 THEN
    NULL; --ok
  ELSIF l_resp.status_code = 401 THEN
    RAISE_APPLICATION_ERROR(-20401,'HTTP 401:Unauthorized');
…
  END IF;

  DBMS_LOB.createtemporary(l_clob, TRUE);
  LOOP
    DECLARE 
      l_buf VARCHAR2(32767);
    BEGIN
      UTL_HTTP.read_text(l_resp, l_buf, 32767);
      DBMS_LOB.writeappend(l_clob, LENGTH(l_buf), l_buf); -- append each chunk to the CLOB
    EXCEPTION WHEN UTL_HTTP.end_of_body THEN EXIT; 
    END;
  END LOOP;
    
  UTL_HTTP.end_response(l_resp);
…

Reading Headers

It is possible to access the HTTP response header much as you would an array.
/*list all headers*/
  FOR i IN 1 .. UTL_HTTP.get_header_count(l_resp) LOOP
    UTL_HTTP.get_header(l_resp, i, l_header_name, l_header_value);
    DBMS_OUTPUT.put_line(i ||':'|| l_header_name || ':' || l_header_value);
  END LOOP;
Strava returns a lot of information in the headers of the HTTP response.  
  • The status of the HTTP request can be obtained from the response structure, but it is also recorded in the header.
  • Strava imposes usage limits.  By default, I can make 100 read calls within 15 minutes, and 1000 in a day.  The current usage counts and limits are reported in the header every time the Strava API is called.  My application tracks them to stop some processes from making too many requests.  
  • Other items are added by AWS (Strava's host).  X-Amz-Cf-Pop:LHR86-P2 indicates that my request to Strava was served by a CloudFront edge server in London (Heathrow area), specifically node 86, partition P2.
GET https://www.strava.com/api/v3/gear/b993101
1:Content-Type:application/json; charset=utf-8
2:Transfer-Encoding:chunked
3:Connection:close
4:Date:Thu, 26 Mar 2026 20:58:11 GMT
5:x-envoy-upstream-service-time:5779
6:server:istio-envoy
7:status:200 OK
8:x-ratelimit-usage:1,901
9:x-ratelimit-limit:200,2000
10:cache-control:max-age=0, private, must-revalidate
11:vary:Origin
12:referrer-policy:strict-origin-when-cross-origin
13:x-permitted-cross-domain-policies:none
14:x-xss-protection:1; mode=block
15:x-request-id:4a357a89-2935-4f41-9fa8-df0f112b804d
16:x-readratelimit-limit:100,1000
17:x-download-options:noopen
18:etag:W/"3c69eb05224d9014b96fb818f43215d7"
19:x-frame-options:DENY
20:x-readratelimit-usage:1,901
21:x-content-type-options:nosniff
22:X-Cache:Miss from cloudfront
23:Via:1.1 933ed3357b8f85661e4d84ebef8a63a8.cloudfront.net (CloudFront)
24:X-Amz-Cf-Pop:LHR86-P2
25:X-Amz-Cf-Id:_dx6YiTmtNagUoazFjSSXzM4nvFG2NEp_vKm0O2kJWmmAYYX4Z_EHg==
Specific named header values can be read directly without looping through the entire header.  I use this to read the usage counts and limits.
e_http_request_failed EXCEPTION;
PRAGMA EXCEPTION_INIT(e_http_request_failed,-29273);
…
-- Read usage limit headers
    BEGIN
      UTL_HTTP.get_header_by_name(l_resp, 'x-readratelimit-limit', l_header_value);
      IF l_header_value IS NOT NULL THEN
        g_short_read_limit := REGEXP_SUBSTR(l_header_value, '[^,]+', 1, 1);
        g_long_read_limit  := REGEXP_SUBSTR(l_header_value, '[^,]+', 1, 2);
      END IF;
    EXCEPTION WHEN e_http_request_failed THEN NULL;
    END;
The application logs and reports these limits.
API Log:15-min read usage: 1/100, 15-min all usage: 1/200, daily read usage: 901/1000, daily all usage: 901/2000

Thursday, March 26, 2026

Loading and Processing JSON with PL/SQL

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Transaction Processing Database.

"JSON (JavaScript Object Notation) is a text-based format for storing and exchanging data in a way that’s both human-readable and machine-parsable. … it has grown into a very capable data format that simplifies data interchange across diverse platforms and programming languages."

I have created a demo project that integrates Strava (an application that tracks athletes' activities – in my case, cycling) with an Oracle Autonomous Database, that then performs some spatial data processing.  The Strava APIs all return data in JSON.  My application, written in PL/SQL, reads and processes those messages.

Mostly, I want to hold that data in a regular database table, structured conventionally in reasonably named columns.  In different places, I have loaded that JSON data in different ways, depending on requirements and circumstances.  There are three options to choose from:
  1. Directly through a JSON Duality View
  2. Convert each name-value pair in explicit code
  3. Extract Values from JSON in Virtual Columns

Directly through a JSON Duality View

Oracle introduced JSON Relational Duality in Oracle 23ai.  A JSON duality view is a mapping between table data and JSON documents.  It is possible to extract data from a table as a JSON document simply by querying a duality view based on that table.  It is also possible to insert data into the table through duality views.

For example, Strava tracks my gear (the bike I ride, or the shoes I wear).  I can extract details of each item with the Strava API, and I get a simple JSON document in return.  This is what I get for one of my bikes.
{
  "id" : "b4922223",
  "primary" : false,
  "name" : "Saracen",
  "nickname" : "Saracen",
  "resource_state" : 3,
  "retired" : false,
  "distance" : 1321925,
  "converted_distance" : 1321.9,
  "brand_name" : null,
  "model_name" : null,
  "frame_type" : 3,
  "description" : "",
  "weight" : 14
}
I want to import that into a table in my database that corresponds to that document.
CREATE TABLE gear 
(gear_id            VARCHAR2(20) NOT NULL
,"PRIMARY"          BOOLEAN      -- PRIMARY is a reserved word, so the column name must be quoted
,name               VARCHAR2(60) 
,nickname           VARCHAR2(60) 
,resource_state     INTEGER
,retired            BOOLEAN
,distance_m         INTEGER      
,distance_km        NUMBER       
,brand_name         VARCHAR2(60) 
,model_name         VARCHAR2(60)
--frame_type
,description        CLOB
,weight             NUMBER       
,last_updated       TIMESTAMP DEFAULT SYSTIMESTAMP
,CONSTRAINT gear_pk PRIMARY KEY(gear_id)
);
I can use a JSON duality view to make the JSON document correspond to my table structure, rather than code it explicitly.
My table uses the column GEAR_ID as the primary key, but the Strava JSON document just has an ID. A JSON duality view must have a key value called '_id'.  I will map that to the primary key column in this table.  
All the other name-value pairs map to the corresponding columns in the table.  However, I am not bothering to import 'frame_type'.
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW gear_dv AS
SELECT JSON {'_id'    : g.gear_id
,'primary'            : g."PRIMARY"
,'name'               : g.name
,'nickname'           : g.nickname
,'resource_state'     : g.resource_state
,'retired'            : g.retired
,'distance'           : g.distance_m
,'converted_distance' : g.distance_km
,'brand_name'         : g.brand_name
,'model_name'         : g.model_name
--frame_type
,'description'        : g.description
,'weight'             : g.weight
}
FROM gear g
WITH INSERT UPDATE
/
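Once the duality view exists, each row can be queried back as a JSON document.  For example, using the sample bike from earlier:

```sql
SELECT g.data FROM gear_dv g
WHERE  g.data."_id" = 'b4922223';
```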
The JSON document arrives as an HTTP response and is held in a CLOB variable.  That has to be parsed into a JSON object with JSON_OBJECT_T.PARSE(). Then I either update an existing record or insert a new one.  In either case, that is done via the duality view.  Note that
  • The Strava id name is updated to _id to match the duality view, and it has to be _id.  
  • I have removed the frame_type and notification_distance name-value pairs.
  j_obj := JSON_OBJECT_T.parse(l_clob);
  l_id  := j_obj.get_string('id');

  BEGIN
    SELECT * INTO r_gear FROM gear WHERE gear_id = p_gear_id FOR UPDATE;
  EXCEPTION
    WHEN no_data_found THEN null;
  END;
  
  IF r_gear.gear_id = p_gear_id THEN
    UPDATE gear_dv d
    SET    d.data = JSON_TRANSFORM
           (value
           ,RENAME '$.id' = '_id'
           ,REMOVE '$.frame_type'
           ,REMOVE '$.notification_distance'
           )
    FROM JSON_TABLE(
           l_clob,
           '$[*]'
           COLUMNS (
             value CLOB FORMAT JSON PATH '$'
           ))
    WHERE d.data."_id" = l_id;
  ELSE
    INSERT INTO gear_dv
    SELECT JSON_TRANSFORM
           (value
           ,RENAME '$.id' = '_id'
           ,REMOVE '$.frame_type'
           ,REMOVE '$.notification_distance'
           )
    FROM JSON_TABLE(
           l_clob,
           '$[*]'
           COLUMNS (
             value CLOB FORMAT JSON PATH '$'
           )
     );
  END IF;
This approach works well where the JSON document structure closely matches the database table structure, and where I don't have to convert the inbound data with any function.  
However, if, for example, the name of the gear had to be upper case, I might put that into the definition of the duality view, thus
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW gear_dv AS
SELECT JSON {'_id'    : g.gear_id
,'name'               : UPPER(g.name)
…
}
FROM gear g WITH INSERT UPDATE
/
But then I would not be able to insert the name via the duality view.  I wouldn't get an error; the view simply wouldn't process the name column, so it would be null after an insert and unchanged after an update.
While the duality view is a very elegant way to map the data, it has limitations.  As soon as you need to transform data during the import, you probably have to go back to coding the mapping for each name-value pair.


Convert each name-value pair

The more conventional approach is to copy each name-value pair to a column using one of the get functions, sometimes passing the value through a function, and possibly with logic to determine whether to copy the data.
Here, I have used a row-type variable and selected the whole current row from the database before updating the data values.
BEGIN
    SELECT * INTO r_activities 
    FROM   activities 
    WHERE  activity_id = p_activity_id 
    FOR UPDATE;
  EXCEPTION
    WHEN no_data_found THEN r_activities.activity_id := p_activity_id;
  END;
…
  j_obj := JSON_OBJECT_T.parse(l_clob);

  r_activities.activity_id       := j_obj.get_number('id');
  r_activities.athlete_id        := j_obj.get_object('athlete').get_number('id');
  r_activities.start_date_utc    := iso8601_utc(j_obj.get_string('start_date'));
  r_activities.start_date_local  := iso8601_tz(j_obj.get_string('start_date_local'), j_obj.get_string('timezone'));
…
  r_activities.distance_km       := j_obj.get_number('distance')/1000;

  r_activities.gear_id           := j_obj.get_string('gear_id');
  IF r_activities.type IN('Ride','Walk','Hike','VirtualRide','Run') THEN
    j_subobj                     := j_obj.get_object('gear');
    IF j_subobj IS NOT NULL THEN
      r_activities.gear_name     := j_subobj.get_string('name');
    END IF;
  END IF;
…
  r_activities.photo_count       := j_obj.get_object('photos').get_number('count');
…
Then the entire row can be inserted or updated from the row-type variable (passed here to a separate procedure as p_activities).
  BEGIN  
    INSERT INTO activities VALUES p_activities;
    dbms_output.put_line(sql%rowcount||' activity inserted');
    COMMIT;

  EXCEPTION 
    WHEN DUP_VAL_ON_INDEX THEN
      UPDATE activities
      SET ROW = p_activities
      WHERE  activity_id = p_activities.activity_id;
      dbms_output.put_line(sql%rowcount||' activity updated');   
      COMMIT;
  END;

Extract Values from JSON in Virtual Columns

The other option is to store the JSON in a CLOB column in the database and convert it on demand via virtual columns.  I have subscribed to receive a message from Strava whenever I log a new activity, or update or delete an existing activity.  That message is received by the database using a REST service.  
The message from Strava just tells me that an activity has been created, updated or deleted.  Then I have to process it.  Sometimes, I get multiple messages for the same activity in quick succession.
{
    "aspect_type": "update",
    "event_time": 1516126040,
    "object_id": 1360128428,
    "object_type": "activity",
    "owner_id": 134815,
    "subscription_id": 120475,
    "updates": {
        "title": "Messy"
    }
}
Strava requires that the REST service respond within 2 seconds, so any processing in it must be kept light.  I want to avoid:
  • spending time converting the JSON data while the REST service is running,
  • any malformed or unexpected variation in JSON causing an error in the REST service,
  • concurrent processing of different requests relating to the same activity causing one REST service handler to block another.  
Therefore, my REST service just stores the JSON in a CLOB column on a table and then triggers a scheduler job to process the message.  The subsequent processing needs to access the name-values in the JSON, so I have created virtual columns on the queue table that will only be evaluated on demand.
CREATE TABLE webhook_events
(ID                NUMBER GENERATED ALWAYS AS IDENTITY
,PAYLOAD           CLOB
,processing_status NUMBER DEFAULT 0 NOT NULL
…
,CONSTRAINT webhook_events_pk PRIMARY KEY (id)
);

ALTER TABLE webhook_events ADD aspect_type      GENERATED ALWAYS AS (JSON_VALUE(payload, '$."aspect_type"')) VIRTUAL;
ALTER TABLE webhook_events ADD object_type      GENERATED ALWAYS AS (JSON_VALUE(payload, '$."object_type"')) VIRTUAL;
ALTER TABLE webhook_events ADD object_id NUMBER GENERATED ALWAYS AS (JSON_VALUE(payload, '$."object_id"'  )) VIRTUAL;
In Strava, times are held in Unix 'Epoch Time' (the number of non-leap seconds since midnight UTC on 1st Jan 1970).  I have created a deterministic PL/SQL function to convert it to an Oracle timestamp and have referenced it in my virtual column definition.  
One virtual column cannot reference another.  So, I could not reference the virtual column EVENT_TIME in another virtual column EVENT_TIMESTAMP.  Instead, I had to reference the event_time name-value pair in both column definitions.
CREATE OR REPLACE FUNCTION strava.epoch_to_tstz 
(p_epoch_seconds IN NUMBER
) RETURN TIMESTAMP WITH TIME ZONE DETERMINISTIC IS
BEGIN
  RETURN TO_TIMESTAMP_TZ('1970-01-01 00:00:00 UTC', 'YYYY-MM-DD HH24:MI:SS TZR') 
       + NUMTODSINTERVAL(p_epoch_seconds, 'SECOND');
END epoch_to_tstz;
/

ALTER TABLE webhook_events ADD event_time NUMBER 
   GENERATED ALWAYS AS (JSON_VALUE(payload, '$."event_time"' )) VIRTUAL;
ALTER TABLE webhook_events ADD event_timestamp TIMESTAMP WITH TIME ZONE 
   GENERATED ALWAYS AS (epoch_to_tstz(JSON_VALUE(payload, '$."event_time"'))) VIRTUAL;
…
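Assuming the function above, the event_time from the sample webhook message converts as expected; 1516126040 seconds after midnight UTC on 1st January 1970 works out as:

```sql
SELECT epoch_to_tstz(1516126040) AS event_timestamp FROM dual;
-- 16 January 2018, 18:07:20 UTC (the display format depends on NLS settings)
```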
If the message is an update, it contains a JSON object listing the updated items and their new values.  The column updates contains this JSON document.
ALTER TABLE webhook_events ADD updates GENERATED ALWAYS AS (JSON_QUERY(payload, '$."updates"' )) VIRTUAL;
I can now reference the virtual columns in SQL in the queue handler without converting and storing the values in regular columns.
…
  FOR i IN ( --iterate requests
    SELECT h.*, a.activity_id
    FROM webhook_events h
      LEFT OUTER JOIN activities a ON a.activity_id = h.object_id 
    WHERE h.processing_status = 0
    AND   h.object_type = 'activity'
    ORDER BY h.id
    FOR UPDATE OF h.processing_status 
  ) LOOP
…

Wednesday, March 25, 2026

Oracle 23ai/26ai: The New RETURNING Clause for the MERGE Statement

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Database. 

The SQL MERGE statement was introduced in Oracle version 9i, allowing what is sometimes called UPSERT logic: a single SQL statement that conditionally inserts or updates rows.  However, one limitation remained.  Unlike INSERT, UPDATE, and DELETE, the MERGE statement did not support the RETURNING clause.  Oracle 23ai/26ai removes this restriction. Developers can now use the RETURNING clause directly in MERGE statements to retrieve values of affected rows. 

The Problem Before Oracle 23

Before Oracle 23, I would have to code a query loop capturing the values that were going to be updated, and then update them in separate statements within the loop, with additional exception handling as required.
…
  l_rows_processed := FALSE;

  FOR s IN (
    SELECT a.activity_id
    ,      listagg(DISTINCT ma.name,', ') WITHIN GROUP (ORDER BY ma.area_level, ma.name) area_list
    FROM   activities a
      INNER JOIN activity_areas aa ON a.activity_id = aa.activity_id
      INNER JOIN my_areas ma ON ma.area_code = aa.area_code and ma.area_number = aa.area_number
    WHERE a.activity_id = p_activity_id
    AND a.processing_status = 4
    AND ma.matchable = 1
    GROUP BY a.activity_id
  ) LOOP
    l_rows_processed := TRUE;

    UPDATE activities u
    SET    u.area_list = s.area_list
    WHERE  u.activity_id = s.activity_id;

    update_activity_description(s.area_list, l_description);
  END LOOP;

  IF NOT l_rows_processed THEN 
    RAISE e_activity_not_found;
  END IF;
…

New Syntax in Oracle 23/26

Alternatively, I can use the MERGE statement to generate the new value for a column and update it in a single SQL statement.  Now, the RETURNING clause also captures that new value in a variable that can be passed to another procedure.
MERGE INTO activities u
  USING (
    SELECT a.activity_id
    ,      listagg(DISTINCT ma.name,', ') WITHIN GROUP (ORDER BY ma.area_level, ma.name) area_list
    FROM   activities a
      INNER JOIN activity_areas aa on a.activity_id = aa.activity_id
      INNER JOIN my_areas ma on ma.area_code = aa.area_code and ma.area_number = aa.area_number
    WHERE a.activity_id = p_activity_id
    AND a.processing_status = 4
    AND ma.matchable = 1
    GROUP BY a.activity_id
  ) S 
  ON (s.activity_id = u.activity_id)
  WHEN MATCHED THEN UPDATE 
  SET u.area_list = s.area_list
  RETURNING NEW area_list INTO l_new_area_list; --new in Oracle 23
  
  IF SQL%ROWCOUNT = 0 THEN 
    RAISE e_activity_not_found;
  ELSE
    update_activity_description(l_new_area_list,l_description);
  END IF;

The benefits are
  • Less and simpler code, which ought therefore to be easier to test and maintain, requiring less additional logic and exception handling.
  • Fewer SQL statements and therefore fewer context switches between PL/SQL and SQL.
Just like the RETURNING clause on UPDATE and DELETE, it is also possible to
  • Reference new and/or old column values,
  • Return a single value into a scalar (single-value) variable,
  • Bulk collect multiple rows into an array variable,
  • Aggregate multiple rows into a scalar variable.
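For example, when the USING query matches several rows, the returned values can be bulk-collected into a collection.  This is a sketch against the same activities table; the staging_activities table and the collection type are hypothetical:

```sql
DECLARE
  TYPE t_area_list_tab IS TABLE OF activities.area_list%TYPE;
  l_area_lists t_area_list_tab;
BEGIN
  MERGE INTO activities u
  USING (SELECT activity_id, area_list FROM staging_activities) s -- hypothetical staging table
  ON (s.activity_id = u.activity_id)
  WHEN MATCHED THEN UPDATE SET u.area_list = s.area_list
  RETURNING NEW area_list BULK COLLECT INTO l_area_lists;
  DBMS_OUTPUT.put_line(l_area_lists.COUNT||' rows updated');
END;
/
```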
I am far from the first to blog about this feature, but it deserves to be better known.


Tuesday, March 24, 2026

ChatGPT & Oracle Development

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Database.

TL;DR

This is an opinion piece about the impact of AI on developers and administrators.  I'll tell you my opinion here at the start:  

AI won't be replacing us, at least not yet, but we may be replaced by someone who is more productive because they are using AI!  I certainly advocate using it, but do so thoughtfully.  Consider whether the answers are sensible, and then test them carefully.

Introduction

To learn more about Oracle 26ai Autonomous Database, I returned to a project I created in 2021 to explore spatial data.  I had exported my activity data from Strava as flat files and then imported them into an Oracle database.  

Now, I have migrated that project to an Autonomous database on OCI and integrated it directly with Strava through their API.  Notifications of new activities are received via a REST service, some processing is done in the Oracle database, and results are written back to the Strava activity description.  All quite simple, but it made me use techniques and technologies that I have never used before.

ChatGPT

When I created the original project in 2021, I had the Oracle documentation and Google.  I had to design and write every bit of code myself.  It all took time.

Now, I have been able to use ChatGPT (other AI Chatbots are available, but this is where I started), and the effect has been remarkable.  I have been pointed at features and techniques that are new to me, and often I have been given a concrete example to start work on, and therefore I have learned about them.

I asked ChatGPT questions in plain language about the details of both Oracle 26 and the Strava API, and it gave me answers in plain language that were generally sensible.  In some cases, it designed complete processing flows; sometimes it just illustrated the answer with code examples.  I could ask follow-up questions, and it would answer them in the context of the earlier question, refining its response.  It became a genuine conversation, though it is not going to pass the Turing Test!

ChatGPT's answers were mostly accurate, though its generated code was not always completely correct.  On some subjects, such as character sets, we went round in circles.  Sometimes, I would point out mistakes, and it would say 'Yes, you are right!' or 'Well spotted!'.  I am not convinced it learnt anything from that.  Over time, I learnt that I needed to ask quite precise questions; otherwise, it would go off in other directions.  Nevertheless, I found I got very quickly from a first draft of code to debugging almost-working code.  I have no doubt that using ChatGPT increased my productivity.  If I had to quantify the effect, I would estimate that it improved my productivity by a factor of about 3.

These are some of my early questions to ChatGPT:

  • "How can Strava notify my Oracle database, using only PL/SQL, that an activity has been added, deleted or updated?" 
    • The result included a complete design for creating a Strava webhook to send an HTTP message to a REST service, including a database data model design, and how to process it by calling the Strava API to extract the activity data.

  • "How would I load GeoJSON … into an Oracle spatial data object geometry in an Oracle autonomous database using just PL/SQL"
    • I got a complete PL/SQL procedure to extract the GeoJSON from the data.gov.ie website and read it into an Oracle spatial geometry.
      • I was able to ask follow-up questions.   When one particular public data set produced errors from sdo_util.from_geojson, after a few other suggestions, ChatGPT provided a complete alternative PL/SQL procedure to create a spatial geometry from just the array of coordinates.  It is slower, but it works reliably.  I use it as an alternative when I get an error from the Oracle function.

There were some notable examples of code that ChatGPT produced correctly the first time, and much faster than I could have.  In particular, extracting all the data in a Strava activity (see strava_http.get_activity_stream) as both an Oracle spatial geometry and a GPX file, including heart monitor, cadence and power meter data if also present (that must conform to the Topographix and Garmin XML schemas).  My code is on GitHub, so you can judge the result for yourself!

Nullius in Verba

This motto (it can be translated as "Take Nobody's Word for It!") is at the heart of the scientific principle.  It can usefully be applied to many things, and certainly to ChatGPT.  

ChatGPT is a hugely powerful tool that seems to be capable of answering any reasonable query.  I would encourage anyone to use it to help develop code faster.  However, every response should be treated with healthy scepticism and be tested carefully.   Whether code compiles and executes is a straightforward question with an essentially binary answer.  Whether that code then does what it is supposed to do requires thorough testing, but then so does human-written code!

Nonetheless, I am hugely impressed by ChatGPT.  I have no doubt that I got further and got there much faster than I ever would otherwise! 

Monday, March 23, 2026

Job Classes on Autonomous Database

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Database.

I have written about using Job Classes with the database scheduler.  It is essentially the same on Autonomous database, but some configuration is delivered by Oracle.  You may choose to use it directly as delivered.  However, I suggest using it as the basis for a custom configuration.

The Autonomous Transaction Processing (ATP) database is delivered with 5 consumer groups and 5 corresponding job classes that map to them.  
OWNER JOB_CLASS_NAME RESOURCE_CONSUMER_GROUP SERVICE
----- -------------- ----------------------- ------------------------------------------------------
LOGGING_LEVEL LOG_HISTORY COMMENTS                                
------------- ----------- ----------------------------------------
SYS    TPURGENT      TPURGENT                GE***********09_GOFASTER1_tpurgent.adb.oraclecloud.com 
RUNS                      Urgent transaction processing jobs     

SYS    TP            TP                      GE***********09_GOFASTER1_tp.adb.oraclecloud.com     
RUNS                      Transaction processing jobs            

SYS    HIGH          HIGH                    GE***********09_GOFASTER1_high.adb.oraclecloud.com   
RUNS                      High priority jobs                     

SYS    MEDIUM        MEDIUM                  GE***********09_GOFASTER1_medium.adb.oraclecloud.com 
RUNS                      Medium priority jobs                   

SYS    LOW           LOW                     GE***********09_GOFASTER1_low.adb.oraclecloud.com    
RUNS                      Low priority jobs                      
It is easy and perfectly reasonable to allocate these delivered job classes to scheduler jobs.  However, these job classes cannot be changed, even by the ADMIN user.
BEGIN dbms_Scheduler.set_attribute('SYS.TPURGENT', 'comments', 'A Comment'); END;
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_ISCHED", line 3513
ORA-06512: at "SYS.DBMS_SCHEDULER", line 3460
ORA-06512: at line 1

Note that the service names are different and unique to every Autonomous Database.  I have been careful to avoid hard-coding them anywhere in my scripts and code.  Instead, I duplicate the delivered job classes and then alter them as necessary.  Thus, each job or group of jobs has its own job class.  I prefer to manage the job and the job scheduler, as far as possible, within a packaged procedure.  This has several advantages.

  • The right version is always available because it has been installed into the database and can be migrated like any other version-controlled source code.  This also covers when a database has been cloned, restored or flashed back.  This saves looking for the right version of the right script.  
  • Jobs can be created and managed by a user who does not have access to manage the job scheduler, but who can execute procedures in the package.
  • I have created a procedure to clone a job class and adjust attributes as required.
  • The code is available on GitHub.
CREATE OR REPLACE PACKAGE BODY strava.strava_job AS
...
e_job_already_exists EXCEPTION;
PRAGMA EXCEPTION_INIT(e_job_already_exists,-27477);
...
PROCEDURE create_job_class
(p_job_class_name          all_scheduler_job_classes.job_class_name%TYPE
,p_based_on_job_class      all_scheduler_job_classes.job_class_name%TYPE
,p_resource_consumer_group all_scheduler_job_classes.resource_consumer_group%TYPE DEFAULT NULL
,p_service                 all_scheduler_job_classes.service%TYPE                 DEFAULT NULL
,p_logging_level           all_scheduler_job_classes.logging_level%TYPE           DEFAULT NULL
,p_log_history             all_scheduler_job_classes.log_history%TYPE             DEFAULT NULL
,p_comments                all_scheduler_job_classes.comments%TYPE                DEFAULT NULL)
IS
  r_job_class all_scheduler_job_classes%ROWTYPE;
...
BEGIN
...  
  SELECT * INTO r_job_class FROM all_scheduler_job_classes
  WHERE owner = 'SYS' AND job_class_name = p_based_on_job_class;
  
  BEGIN
    DBMS_SCHEDULER.CREATE_JOB_CLASS(p_job_class_name); 
  EXCEPTION WHEN e_job_already_exists THEN NULL;
  END;
  
  IF p_resource_consumer_group IS NOT NULL THEN r_job_class.resource_consumer_group := p_resource_consumer_group; END IF;
  IF p_service                 IS NOT NULL THEN r_job_class.service := p_service; END IF;
  IF p_logging_level           IS NOT NULL THEN r_job_class.logging_level := p_logging_level; END IF;
  IF p_log_history             IS NOT NULL THEN r_job_class.log_history := p_log_history; END IF;
  IF p_comments                IS NOT NULL THEN r_job_class.comments := p_comments; END IF;
  
  dbms_scheduler.set_attribute(p_job_class_name, 'resource_consumer_group', r_job_class.resource_consumer_group);
  dbms_scheduler.set_attribute(p_job_class_name, 'service'                , r_job_class.service);
  IF    r_job_class.logging_level = 'OFF'         THEN dbms_scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_OFF);
  ELSIF r_job_class.logging_level = 'RUNS'        THEN dbms_scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_RUNS);
  ELSIF r_job_class.logging_level = 'FAILED RUNS' THEN dbms_scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_FAILED_RUNS);
  ELSIF r_job_class.logging_level = 'FULL'        THEN dbms_scheduler.set_attribute(p_job_class_name, 'logging_level', DBMS_SCHEDULER.LOGGING_FULL);
  END IF;
  dbms_scheduler.set_attribute(p_job_class_name, 'log_history'            , r_job_class.log_history);
  dbms_scheduler.set_attribute(p_job_class_name, 'comments'               , r_job_class.comments);
...
EXCEPTION 
  WHEN no_data_found THEN
...
    RAISE;
END create_job_class;
This new procedure is called from the procedures that create jobs.  In the example below, the LOW job class is cloned into a new PURGE_API_LOG_CLASS that is used by the PURGE_API_LOG job.  I have set the log history retention to 7 days, but all other settings remain the same.  
PROCEDURE create_purge_api_log_job
IS
  k_job_name  CONSTANT VARCHAR2(128 CHAR) := 'STRAVA.PURGE_API_LOG';
  k_job_class CONSTANT VARCHAR2(128 CHAR) :=    'SYS.PURGE_API_LOG_CLASS';
BEGIN
...
  create_job_class(k_job_class,'LOW', p_log_history=>7);
  BEGIN
    dbms_scheduler.create_job
    (job_name => k_job_name
    ,job_type => 'STORED_PROCEDURE'
    ,job_action => 'STRAVA.STRAVA_HTTP.PURGE_API_LOG'
    ,enabled => FALSE
    );
  EXCEPTION WHEN e_job_already_exists THEN NULL;
  END;
...
  dbms_scheduler.set_attribute(name => k_job_name, attribute => 'JOB_CLASS', value => k_job_class);
...
  dbms_scheduler.enable(name => k_job_name);
...
END create_purge_api_log_job;
...
END strava_job;
/
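The jobs can then be (re)created by calling the packaged procedure, and the resulting job class checked in the data dictionary (a sketch; the attribute values returned depend on what the procedure set):

```sql
BEGIN
  strava.strava_job.create_purge_api_log_job;
END;
/

SELECT job_class_name, resource_consumer_group, logging_level, log_history
FROM   all_scheduler_job_classes
WHERE  job_class_name = 'PURGE_API_LOG_CLASS';
```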

Now I have several job classes:

OWNER JOB_CLASS_NAME                        RESOURCE_CON SERVICE
----- ------------------------------------- ------------ ------------------------------------------------------
LOGGING_LEVEL LOG_HISTORY COMMENTS
------------- ----------- ------------------------------
SYS   CREATE_ACTIVITY_HSEARCH_UPD_ALL_CLASS LOW          GE***********09_GOFASTER1_low.adb.oraclecloud.com
RUNS                    7 Low priority jobs

SYS   ACTIVITY_AREA_LIST_UPD_ALL_CLASS      LOW          GE***********09_GOFASTER1_low.adb.oraclecloud.com
RUNS                    7 Low priority jobs

SYS   PURGE_API_LOG_CLASS                   LOW          GE***********09_GOFASTER1_low.adb.oraclecloud.com
RUNS                    7 Low priority jobs

SYS   PURGE_EVENT_QUEUE_CLASS               LOW          GE***********09_GOFASTER1_low.adb.oraclecloud.com
RUNS                    7 Low priority jobs

SYS   BATCH_LOAD_ACTIVITIES_CLASS           MEDIUM       GE***********09_GOFASTER1_medium.adb.oraclecloud.com
RUNS                    7 Medium priority jobs

SYS   UPDATE_STRAVA_ACTIVTY_CLASS           MEDIUM       GE***********09_GOFASTER1_medium.adb.oraclecloud.com
RUNS                    7 Medium priority jobs

SYS   PROCESS_WEBHOOK_QUEUE_CLASS           MEDIUM       GE***********09_GOFASTER1_medium.adb.oraclecloud.com
RUNS                    7 Medium priority jobs

SYS   RENEW_STRAVA_TOKENS_CLASS             HIGH         GE***********09_GOFASTER1_high.adb.oraclecloud.com
RUNS                    7 High priority jobs
Each job has been allocated to a different job class.  In future, I can control the behaviour of each job by adjusting the job class.
OWNER  JOB_NAME                             JOB_CLASS
------ ------------------------------------ -------------------------------------
STRAVA ACTIVITY_AREA_LIST_UPD_ALL_JOB       ACTIVITY_AREA_LIST_UPD_ALL_CLASS
STRAVA BATCH_LOAD_ACTIVITIES_JOB            BATCH_LOAD_ACTIVITIES_CLASS
STRAVA CREATE_ACTIVITY_HSEARCH_UPD_ALL_JOB  CREATE_ACTIVITY_HSEARCH_UPD_ALL_CLASS
STRAVA PROCESS_WEBHOOK_QUEUE_JOB            PROCESS_WEBHOOK_QUEUE_CLASS
STRAVA PURGE_API_LOG                        PURGE_API_LOG_CLASS
STRAVA PURGE_EVENT_QUEUE                    PURGE_EVENT_QUEUE_CLASS
STRAVA RENEW_STRAVA_TOKENS_JOB              RENEW_STRAVA_TOKENS_CLASS
STRAVA UPDATE_STRAVA_ACTIVTY_JOB            UPDATE_STRAVA_ACTIVTY_CLASS
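For example, if I later decide to keep the purge job's log records for longer, I only need to adjust its job class rather than the job itself (a sketch; unlike the delivered classes such as TPURGENT, the classes I created myself can be altered):

```sql
BEGIN
  dbms_scheduler.set_attribute('SYS.PURGE_API_LOG_CLASS', 'log_history', 14);
END;
/
```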