Thursday, May 14, 2026

Data Pump Export to OCI Storage from Autonomous Database

This is the second of three blogs about using Data Pump on an Autonomous Oracle database.  

  1. General considerations and setup
  2. Export
  3. Import
I have found places on the internet that say you cannot export directly from Autonomous Database to OCI Storage, and I struggled to find an example that worked.  However, the following example does work!

Export Job

This anonymous PL/SQL block creates a data pump export job.  Several elements must be set up.
  • A new data pump job needs to be created/opened. In this example, I am exporting the schema STRAVA.
  • Some data pump parameters are set.
  • The data pump export file location and name are set.  
    • I want to write the dump file directly to the OCI bucket storage, so I must specify the file type as DBMS_DATAPUMP.KU$_FILE_TYPE_URIDUMP_FILE.  On an autonomous database, the dump file name becomes the prefix of the object names under which the various pieces of the export are stored.
    • DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE  can only be used to write to a database directory. 
  • The data pump log file destination and name are specified.  Again, DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE can only specify a database directory.  Therefore, it will be the DATA_PUMP_DIR database directory.
  • A metadata filter restricts the export to just objects in the STRAVA schema.
DECLARE
  h NUMBER;
  l_file_name VARCHAR2(100);
  l_urifile_name VARCHAR2(200);
  l_dir VARCHAR2(100) := 'DATA_PUMP_DIR';
BEGIN
  l_file_name := 'export_strava_%T';
  l_urifile_name := 'https://objectstorage.uk-london-1.oraclecloud.com/n/lr********ea/b/bucket-gofaster1/o/'||l_file_name;
  h := DBMS_DATAPUMP.OPEN
       (operation => 'EXPORT'
       ,job_mode => 'SCHEMA'
       );

  dbms_datapump.set_parameter
       (handle => h
       ,name   => 'COMPRESSION'
       ,value  => 'ALL'
       );

  dbms_datapump.set_parameter
       (handle => h
       ,name   => 'ESTIMATE'
       ,value  => 'BLOCKS'
       );

  DBMS_DATAPUMP.ADD_FILE
       (handle    => h
       ,filename  => l_urifile_name||'.dmp'
       ,directory => 'OBJECT_STORE_CRED'
       ,filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_URIDUMP_FILE
       ,reusefile => 1
       );

  DBMS_DATAPUMP.ADD_FILE
       (handle    => h
       ,filename  => l_file_name||'.log'
       ,directory => l_dir
       ,filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
       ,reusefile => 1
       );

  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''STRAVA'')');
  DBMS_DATAPUMP.START_JOB(h);
END;
/

Things That Did Not Work

I have not been able to write the data pump log file to the OCI bucket storage.  That can only be written to a database directory.
  DBMS_DATAPUMP.ADD_FILE
       (handle    => h
       ,filename  => l_urifile_name||'.log'
       ,directory => 'OBJECT_STORE_CRED'
       ,filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
       ,reusefile => 1
  );
Normally, I can specify a maximum file size for a data pump export file.  However, on Autonomous, the maximum file size setting was ignored; I always got 24MB chunks.
  DBMS_DATAPUMP.ADD_FILE
       (handle    => h
       ,filename  => l_urifile_name||'.dmp'
       ,directory => 'OBJECT_STORE_CRED'
       ,filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_URIDUMP_FILE
       ,filesize  => '4G' --but this is ignored
       ,reusefile => 1
       );

How to See What Data Pump Jobs are Running?

The view DBA_DATAPUMP_JOBS reports on all data pump jobs, irrespective of their state.
select * from dba_datapump_jobs
/

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE            DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
---------- -------------------- ---------- ---------- ------------ ---------- ----------------- -----------------
ADMIN     SYS_EXPORT_SCHEMA_01 EXPORT    SCHEMA    EXECUTING            1                 1                 3

DBA_DATAPUMP_SESSIONS reports on the database sessions currently attached to a data pump job.
select p.owner_name, p.job_name, p.inst_id, p.saddr, session_type, sid
from dba_datapump_sessions p
  left outer join gv$session s on s.inst_id = p.inst_id and s.saddr = p.saddr
;

OWNER_NAME JOB_NAME                INST_ID SADDR            SESSION_TYPE          SID
---------- -------------------- ---------- ---------------- -------------- ----------
ADMIN     SYS_EXPORT_SCHEMA_01          3 000000128EF28558 DBMS_DATAPUMP      22536
ADMIN     SYS_EXPORT_SCHEMA_01          3 0000001248491BF0 WORKER             31689
ADMIN     SYS_EXPORT_SCHEMA_01          3 00000012976F07F8 MASTER             33092
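The progress of a running export can also be tracked through (G)V$SESSION_LONGOPS, where the Data Pump master process records work done against the total estimate.  The following query is only a sketch; it assumes the job name (matched by the OPNAME column) follows the default SYS_EXPORT naming seen above.

```sql
-- Data Pump progress is reported to (G)V$SESSION_LONGOPS.
-- OPNAME matches the Data Pump job name; SOFAR and TOTALWORK are
-- expressed in the units given by the UNITS column.
select sid, serial#, opname, sofar, totalwork, units,
       round(100*sofar/totalwork,1) pct_done
from   gv$session_longops
where  opname like 'SYS_EXPORT%'
and    sofar < totalwork
/
```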

Wait for Running Data Pump Jobs

This PL/SQL block uses DBMS_DATAPUMP.WAIT_FOR_JOB to wait for a currently executing data pump job to complete.
DECLARE
  h NUMBER;
  l_job_name VARCHAR2(100);
  l_job_state VARCHAR2(100);
  e_job_does_not_exists EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_job_does_not_exists,-31626);
BEGIN
  FOR i IN (SELECT DISTINCT owner_name, job_name FROM dba_datapump_jobs) 
  LOOP
    BEGIN
      l_job_name := i.owner_name||'.'||i.job_name;
      -- Attach to running job by name
      h := DBMS_DATAPUMP.ATTACH(job_name => l_job_name);

      DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => l_job_state);
      dbms_output.put_line('Datapump Job: '||l_job_name||', Status: '||l_job_state);

      DBMS_DATAPUMP.DETACH(h);
    EXCEPTION
      WHEN e_job_does_not_exists THEN
        dbms_output.put_line('Datapump Job: '||l_job_name||' does not exist');
    END;
  END LOOP;
END;
/

Spool Log File

I have not been able to write data pump log files directly to OCI storage, only to a database directory.  However, they are generally small.  I can read them and print them out with DBMS_OUTPUT.
SET SERVEROUTPUT ON SIZE UNLIMITED
SET ECHO ON
DECLARE
    l_file   UTL_FILE.FILE_TYPE;
    l_line   VARCHAR2(32767);
    l_dir    CONSTANT VARCHAR2(30) := 'DATA_PUMP_DIR';
BEGIN
  FOR i IN (
    SELECT * FROM TABLE(DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR'))
    WHERE object_name like 'export_strava%.log'
  ) LOOP
    BEGIN
      -- Open the file for reading
      l_file := UTL_FILE.FOPEN(location => l_dir,
                               filename => i.object_name,
                               open_mode => 'R',
                               max_linesize => 32767);

      -- Loop through each line
      LOOP
        BEGIN
          UTL_FILE.GET_LINE(l_file, l_line);
          DBMS_OUTPUT.PUT_LINE(l_line);
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            EXIT;  -- exit loop at end of file
        END;
      END LOOP;
      -- Close the file
      UTL_FILE.FCLOSE(l_file);
    EXCEPTION
      WHEN UTL_FILE.INVALID_PATH THEN
        DBMS_OUTPUT.PUT_LINE('Invalid directory path: ' || l_dir);
      WHEN UTL_FILE.INVALID_MODE THEN
        DBMS_OUTPUT.PUT_LINE('Invalid mode for file: ' || i.object_name);
      WHEN UTL_FILE.READ_ERROR THEN
        DBMS_OUTPUT.PUT_LINE('Read error on file: ' || i.object_name);
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Unexpected error: ' || SQLERRM);
    END;
  END LOOP;

END;
/
The output is similar to that of a Data Pump export run anywhere else.
Starting "ADMIN"."SYS_EXPORT_SCHEMA_01":  
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "STRAVA"."MY_AREAS"                           950 MB
.  estimated "STRAVA"."ACTIVITIES"                       580.1 MB
.  estimated "STRAVA"."ACTIVITY_AREAS"                       5 MB
.  estimated "STRAVA"."WEBHOOK_EVENTS"                     640 KB
.  estimated "STRAVA"."GEAR"                               320 KB
.  estimated "STRAVA"."SCHEDULER$_JOB_ARG"                 320 KB
.  estimated "STRAVA"."API_LOG"                            128 KB
.  estimated "STRAVA"."MY_AREA_CODES"                      128 KB
.  estimated "STRAVA"."STRAVA_TOKENS"                       64 KB
.  estimated "STRAVA"."ACTIVITIES_EXT"                     4.7 KB
Total estimation using BLOCKS method: 1.5 GB
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ORACLE_OBJECT_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ/SCHEDULER
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA/SCHEDULER
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA/LBAC_EXP
. . exported "STRAVA"."MY_AREAS"                         384.5 MB  117584 rows
. . exported "STRAVA"."ACTIVITIES"                       347.9 MB    5614 rows
. . exported "STRAVA"."ACTIVITY_AREAS"                     1.2 MB  134817 rows
. . exported "STRAVA"."WEBHOOK_EVENTS"                     9.9 KB      32 rows
. . exported "STRAVA"."GEAR"                               6.4 KB      12 rows
. . exported "STRAVA"."SCHEDULER$_JOB_ARG"                 5.4 KB       4 rows
. . exported "STRAVA"."API_LOG"                            5.6 KB       4 rows
. . exported "STRAVA"."MY_AREA_CODES"                      5.8 KB      58 rows
. . exported "STRAVA"."STRAVA_TOKENS"                      5.2 KB       1 rows
. . exported "STRAVA"."ACTIVITIES_EXT"                       0 KB       0 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "ADMIN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ADMIN.SYS_EXPORT_SCHEMA_01 is:
  https://objectstorage.uk-london-1.oraclecloud.com/n/lr********ea/b/bucket-gofaster1/o/export_strava_20260512.dmp
Job "ADMIN"."SYS_EXPORT_SCHEMA_01" successfully completed at Tue May 12 20:27:28 2026 elapsed 0 00:05:36

Move Log File to OCI

DBMS_CLOUD.PUT_OBJECT and DBMS_CLOUD.GET_OBJECT copy files to and from OCI storage.  I use DBMS_CLOUD.PUT_OBJECT to copy the data pump log files from the DATA_PUMP_DIR directory to my OCI bucket storage; they can then be deleted from the database directory with UTL_FILE.FREMOVE.
DECLARE 
  l_counter INTEGER := 0;
  l_filename VARCHAR2(100);
  l_dir VARCHAR2(100) := 'DATA_PUMP_DIR';
  l_uri VARCHAR2(200) := 'https://objectstorage.uk-london-1.oraclecloud.com/n/lr********ea/b/bucket-gofaster1/o/';

BEGIN
  FOR i IN (
    SELECT * FROM TABLE(DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR'))
    WHERE regexp_like(object_name,'export_strava.+\.(log|dmp)')
  ) LOOP
    l_counter := l_counter + 1;
    l_filename := REPLACE(i.object_name,'%T',TO_CHAR(i.created,'YYYYMMDD'));
    l_filename := REPLACE(l_filename,'%U',l_counter);
    dbms_output.put_line(l_filename);

    DBMS_CLOUD.PUT_OBJECT
         (object_uri      => l_uri||l_filename
         ,directory_name  => l_dir
         ,file_name       => i.object_name
         ,credential_name => 'OBJECT_STORE_CRED'
         );
    UTL_FILE.FREMOVE('DATA_PUMP_DIR',i.object_name);
  END LOOP;
END;
/

export_strava_20260512.log

Download Dump File Segments

The entire directory of dump file segments can be bulk downloaded with the OCI CLI.
oci os object bulk-download --bucket-name bucket-gofaster1 --download-dir C:\Users\david\OneDrive\Documents\strava\datapump --prefix export_strava_20260512.dmp_segments
Downloaded export_strava_20260512.dmp_segments/aaaaab  [####################################]  100%
Downloaded export_strava_20260512.dmp_segments/aaaaak  [####################################]  100%
Downloaded export_strava_20260512.dmp_segments/aaaaal  [####################################]  100%
...
Downloaded export_strava_20260512.dmp_segments/aaaabd  [####################################]  100%
Downloaded export_strava_20260512.dmp_segments/aaaaaz  [####################################]  100%
Downloaded export_strava_20260512.dmp_segments/aaaabf  [####################################]  100%
{
  "download-failures": {},
  "downloaded-objects": [
    "export_strava_20260512.dmp_segments/aaaaab",
    "export_strava_20260512.dmp_segments/aaaaak",
    "export_strava_20260512.dmp_segments/aaaaal",
    ...
    "export_strava_20260512.dmp_segments/aaaabd",
    "export_strava_20260512.dmp_segments/aaaaaz",
    "export_strava_20260512.dmp_segments/aaaabf"
  ],
  "skipped-objects": []
}


Wednesday, May 13, 2026

Data Pump to/from OCI Storage in Autonomous Database: General Considerations and Setup

This is the first of three blogs about using Data Pump on an Autonomous Oracle database.  
  1. General considerations and setup
  2. Export
  3. Import
Using Data Pump on an Autonomous Oracle database is a little different.  I have run into several limitations, so I'm sharing what I have learned.  Throughout the examples in these articles, I am using an Oracle 26 Autonomous Database on Oracle OCI Free Tier.

No Operating System Access

The first thing to bear in mind is that there is no operating system command-line access on the database server, so you cannot run the Data Pump clients expdp and impdp there.  Instead, Data Pump jobs can be created through the PL/SQL API, DBMS_DATAPUMP.  This will be demonstrated in the other two blog posts in this series.

Database Directory DATA_PUMP_DIR

On an autonomous database, there is no access to the underlying operating system or file system.  However, several database directories are created in the database, though write access is granted on only two of them.  In both cases, write access is granted to, and can be granted on by, the ADMIN and C##CLOUD$SERVICE users.
  • DATA_PUMP_DIR is provided for the use of Data Pump.  Write access is also granted to roles EXP_FULL_DATABASE, IMP_FULL_DATABASE, DATAPUMP_CLOUD_EXP and DATAPUMP_CLOUD_IMP. 
  • SQL_TCB_DIR is used by the SQL test case builder (see DBMS_SQLDIAG).  
However, these two directories point to database file systems.  
select directory_name, directory_path from dba_directories
where directory_name IN('DATA_PUMP_DIR','SQL_TCB_DIR')
order by directory_path
/

DIRECTORY_NAME  DIRECTORY_PATH                                              
--------------- ------------------------------------------------------------
DATA_PUMP_DIR  /u03/dbfs/4A03C84C700735C1E063D760000A3AE1/data/dpdump    
SQL_TCB_DIR    /u03/dbfs/4A03C84C700735C1E063D760000A3AE1/data/tcb_dir
Thus, their contents exist within the database.  They are in a tablespace called DBFS_DATA, and they count towards the total size of the container.
select tablespace_name, file_id, file_name, bytes, status
from dba_data_Files
where tablespace_name like 'DBFS%'
/
TABLESPACE_NAME    FILE_ID FILE_NAME                                               BYTES STATUS   
--------------- ---------- -------------------------------------------------- ---------- ---------
DBFS_DATA           10777 +DATA/FCEE691J/4A03C84C700735C1E063D760000A3AE1/DA 4399824896 AVAILABLE
                           TAFILE/dbfs_data.5025.1224350611
It is very easy to export to and import from these directories, but if you create a large dump file, the data files of the DBFS_DATA tablespace will extend and consume space in the database container.  In an autonomous database, no user has sufficient access to shrink the tablespace by conventional means.  
In the paid Autonomous Database, the space can be released using the OCI CLI (see the shrink command reference).  This is not available in the free tier.
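The shrink is requested against the Autonomous Database resource itself.  As a hedged sketch (the exact subcommand and parameters should be checked against the current OCI CLI reference, and the OCID below is a placeholder):

```shell
# Ask the Autonomous Database to shrink its storage to the minimum required.
# Replace the OCID with that of your own ADB instance.
oci db autonomous-database shrink \
    --autonomous-database-id ocid1.autonomousdatabase.oc1..example
```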

Setting Up OCI Command Line Interface

Create a pair of public and private keys in the command line interface.
oci setup keys

File C:\Users\david\.oci\oci_api_key_public.pem already exists, do you want to overwrite? [y/N]: N
File C:\Users\david\.oci\oci_api_key.pem already exists, do you want to overwrite? [y/N]: N
Public key fingerprint: 72:7a:3d:1b:66:65:cf:a1:ed:2a:3d:42:6f:03:53:0b
Then follow the instructions in How to Upload the Public Key to upload the public key to the Tokens and Keys section of the OCI console.

Take the information in the configuration file preview and paste it back into the OCI configuration file.
This is in the same location as the public and private key files, in my case C:\Users\david\.oci\config.
Specify the fully qualified path and name of the private key file.
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaohj2suignixq7jjyrbkodaz5r73wonx25pzaspfpmkiyg76sra77
fingerprint=05:d4:fc:e2:81:9e:18:fb:00:d5:2b:9e:12:34:56:78
key_file=C:\Users\david\.oci\oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaaf5e7vjqhg6m3jiv3zdos3b2hkbixqyq3tvqhtuyftyemzejzwosv
region=uk-london-1

Create Credential

Finally, create a credential with access to the OCI storage bucket that will hold the Data Pump files.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
      credential_name   => 'OBJECT_STORE_CRED',
      username          => 'OracleIdentityCloudService/dba@go-faster.co.uk',
      password          => ';I)<P+3U:GRi6mprPD1n'
  );
END;
/

Default Credential

I have found it convenient to specify a default credential.
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.OBJECT_STORE_CRED';
For example, I can list the contents of the OCI bucket without having to specify the credential every time.
SELECT object_name, bytes/1024/1024 Mb
FROM table(
  DBMS_CLOUD.LIST_OBJECTS
  (location_uri => 'https://objectstorage.uk-london-1.oraclecloud.com/n/lrp********a/b/bucket-gofaster1/o/'));

OBJECT_NAME                                                        MB
------------------------------------------------------------ --------
export_strava_20260506.dmp                                      0.000
export_strava_20260506.dmp_segments/aaaaaa                      2.000
export_strava_20260506.dmp_segments/aaaaab                     24.000
...
export_strava_20260506.dmp_segments/aaaabf                     11.563
export_strava_20260506.log                                      0.004
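The current default can be verified by querying DATABASE_PROPERTIES.  A minimal sketch:

```sql
-- Show which credential is currently the database default.
select property_value
from   database_properties
where  property_name = 'DEFAULT_CREDENTIAL'
/
```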
The next blog describes how to create data pump jobs to export data.

Thursday, April 09, 2026

Consuming Inbound HTTP Requests with REST Services

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Transaction Processing Database.

I have created a demo project that integrates Strava (an application that tracks athletes' activities – in my case, cycling) with an Oracle Autonomous database that then performs some spatial data processing and sends the results back to Strava.  
I have described how to call the Strava API in HTTP requests from the database to pull data from Strava or push it back.  To complete the integration, the application must receive and process inbound HTTP requests from Strava when activities are added, updated or deleted.  
The database has to handle two kinds of messages from Strava.  
  • I have to create a subscription in Strava to receive notifications of new activities.  Part of the authentication of that request includes responding promptly to an HTTP GET request.
  • Then, when I have created my subscription, I receive HTTP POST requests to tell me that an activity has been created, updated or deleted in Strava that I then process.
I have created PL/SQL packaged procedures and REST services for each of these requests.

ORDS URL

Oracle REST Data Services (ORDS) is configured by default for Autonomous Database.  You can find the public access URL for the database on the OCI console under Tool Configuration.

Enabling REST

All my code exists within a database schema called STRAVA.  Access to ORDS must be granted to this schema.
ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'STRAVA',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'strava',
    p_auto_rest_auth      => FALSE
  );

Defining REST services

My application runs entirely within the database schema STRAVA.  To set up my Strava subscription, I had to give my API application a name in Strava, so I called it PlaceCloud.  Therefore, my REST services are in a module also called PlaceCloud.
  ORDS.DEFINE_MODULE(
    p_module_name    => 'placecloud',
    p_base_path      => 'placecloud/'
  );
Next, I have to create a template.
  ORDS.DEFINE_TEMPLATE(
    p_module_name => 'placecloud',
    p_pattern     => 'event'
  );
Schema, Module and Template combine to define the path of the REST Service.  
  • The schema is STRAVA.
  • The module is placecloud.
  • The template pattern is event.
That is appended to the public access URL that I got from the OCI console above.  Thus, the complete URL of my REST service is:
https://GE************9-GOFASTER1.adb.uk-london-1.oraclecloudapps.com/ords/strava/placecloud/event

When I request to create a Strava Webhook event subscription, Strava sends a callback to a URL I specify to validate the request.  It will be the URL of my REST service.  I can simulate the callback for testing with curl.

curl -i -S -X GET -H "Content-Type: application/json" "https://GE************9-GOFASTER1.adb.uk-london-1.oraclecloudapps.com/ords/strava/placecloud/event?hub.verify_token=MyPlaceCloud&hub.challenge=abc123&hub.mode=subscribe"
NB: Double quotes, especially around HTTPS URLs, are important; otherwise, the & will be interpreted by the command line, and you will get an error from the REST service!

Get Handler

The REST service for GET calls an anonymous PL/SQL block that calls my packaged procedure, passing parameters from the query string.  A status code and JSON response are returned.
  ORDS.DEFINE_HANDLER(
    p_module_name => 'placecloud',
    p_pattern     => 'event',
    p_method      => 'GET',
    p_source_type => ORDS.SOURCE_TYPE_PLSQL,  
    p_source      => q'[
      DECLARE
        l_status_code NUMBER;
        l_response    VARCHAR2(200 CHAR);
        l_message     CLOB;
      BEGIN 
        strava.webhook_pkg.handle_get(:hub_challenge,:hub_verify_token, l_response, l_status_code, l_message); 
        owa_util.status_line(l_status_code, l_message, FALSE);
        owa_util.mime_header('application/json', FALSE);
        :status_code := l_status_code;
        owa_util.http_header_close;
        htp.p(l_response);
        :response := l_response;
      END;
      ]',
    p_mimes_allowed => 'application/json',
    p_items_per_page => 0
  );

Handling Parameters

Strava puts three parameters in the callback.  Their names have a dot (".") in them.  I have to map them to a bind variable in the PL/SQL block named without the dot.
  ORDS.DEFINE_PARAMETER(
    p_module_name        => 'placecloud',
    p_pattern            => 'event',
    p_method             => 'GET',
    p_name               => 'hub.challenge',
    p_bind_variable_name => 'hub_challenge',
    p_source_type        => 'URI',
    p_param_type         => 'STRING',
    p_access_method      => 'IN'
  );
  ORDS.DEFINE_PARAMETER(
    p_module_name        => 'placecloud',
    p_pattern            => 'event',
    p_method             => 'GET', 
    p_name               => 'hub.verify_token',
    p_bind_variable_name => 'hub_verify_token',
    p_source_type        => 'URI',
    p_param_type         => 'STRING',
    p_access_method      => 'IN'
  );
  ORDS.DEFINE_PARAMETER(
    p_module_name        => 'placecloud',
    p_pattern            => 'event',
    p_method             => 'GET',
    p_name               => 'hub.mode',
    p_bind_variable_name => 'hub_mode',
    p_source_type        => 'URI',
    p_param_type         => 'STRING',
    p_access_method      => 'IN'
  );

Post Handler

Having created the subscription, Strava sends an HTTP POST request to the same URL with a JSON body every time an activity is created, updated or deleted.
  ORDS.DEFINE_HANDLER(
    p_module_name => 'placecloud',
    p_pattern     => 'event',
    p_method      => 'POST',
    p_source_type => ORDS.SOURCE_TYPE_PLSQL,
    p_source      => q'[
      BEGIN strava.webhook_pkg.handle_post(:body_text,:status_code); END;
      ]'
  );
I simply pass the message body to my procedure and return a status code.  See Loading and Processing JSON with PL/SQL.  Again, I can test this with curl.  I can put a sample JSON document in a file,
{
"object_type":"activity",
"object_id":123,
"aspect_type":"create",
"owner_id":999,
"event_time":1700000000
}
and post the file with curl.
curl -i -S -X POST --data-ascii @C:\temp\resttest.json -H "Content-Type: application/json" \
"https://GE************9-GOFASTER1.adb.uk-london-1.oraclecloudapps.com/ords/strava/placecloud/event"

Further Reading

ThatJeffSmith has several very helpful posts: REST APIs for Oracle Database, everything you need to know.

Wednesday, April 08, 2026

Loading GeoJSON Format GeoSpatial Data into Oracle Autonomous Database

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Transaction Processing Database.

As a training exercise, I have created a database application that loads GPS data from activities logged on Strava and compares them with geospatial data to identify the named places visited by the activity. This geospatial data is publicly available in many places, often from government, and is usually freely available, at least for non-commercial purposes.

ESRI Shapefiles

When I first wrote the spatial processing in 2020, I used spatial data formatted as ESRI shapefiles.  These were converted to Oracle spatial geometries and loaded into the database using the Java class oracle.spatial.util.SampleShapefileToJGeomFeature.  It reads attributes from a dBase file and geometries from a shapefile, and then writes them to a database table.  I was able to run it on the database server.

export clpath=$ORACLE_HOME/suptools/tfa/release/tfa_home/jlib/ojdbc5.jar:$ORACLE_HOME/md/jlib/sdoutl.jar:$ORACLE_HOME/md/jlib/sdoapi.jar
java -cp $clpath oracle.spatial.util.SampleShapefileToJGeomFeature -h <dbhost> -p 1521 -sn oracle_pdb -u strava -d strava \
     -t $table -f $base -r 4326 -g geom
However, this approach is not suitable for an autonomous database.  ADB cannot access the local file system, there is no host command-line access, and it does not allow the external libraries (JARs) required by the shapefile utility.  The utility would have to be run on another host, but it only makes a simple JDBC connection and cannot handle the ADB wallet out of the box.

GeoJSON

I have found that many spatial data sets are now available as GeoJSON, often as well as a shapefile.  GeoJSON is much easier to handle, especially in the Autonomous database, because it can be processed entirely within the database.

Often, GeoJSON can be downloaded directly into a database with an HTTP call.  For example, the definitions of Areas of Outstanding Natural Beauty (AONBs) in England can be downloaded as a GeoJSON file from the UK Government Planning Data website.  I can make the HTTP call to download the file directly from PL/SQL.  I usually load the JSON into a staging table before trying to convert it to a spatial geometry.  The HTTP request is made by a packaged function strava_http.http_request.
TRUNCATE TABLE strava.stage_geo_data;
DECLARE 
  l_url VARCHAR2(1000) := 'https://files.planning.data.gov.uk/dataset/area-of-outstanding-natural-beauty.geojson';
  l_clob CLOB;
BEGIN
  DBMS_LOB.createtemporary(l_clob, TRUE);
  l_clob:=strava_http.http_request(l_url,99);    
  INSERT INTO stage_geo_data (name, geo_json) VALUES ('Natural England', l_clob);
  DBMS_LOB.freetemporary(l_clob);
END;
/
SELECT x.*, length(geo_json) FROM stage_geo_data x;
However, sometimes the GeoJSON must be downloaded as a file, uploaded to OCI Bucket storage, and then read into the database from there.
DECLARE
  l_blob BLOB;
BEGIN
  -- Read the file from Object Storage into a BLOB
  l_blob := DBMS_CLOUD.GET_OBJECT
            (credential_name => 'OBJECT_STORE_CRED'
            ,object_uri      => 'https://objectstorage.uk-london-1.oraclecloud.com/n/l**********a/b/bucket-gofaster1/o/ch0.json'
            );
  INSERT INTO strava.stage_geo_data(name, geo_json) VALUES ('Switzerland', l_blob);
  COMMIT;
END;
/

Converting Shapefiles to GeoJSON 

Some providers still prefer to make only shapefiles available and not GeoJSON (for example, the Swiss Federal Office of Topography - swisstopo).  However, it is simple to convert the shapefiles to GeoJSON with the ogr2ogr utility, available as part of GDAL (a translator library for various geospatial data formats).  On Windows, I use OSGeo4W and run ogr2ogr within that.

Many countries have their own geoids and coordinate systems, which can be more accurate over a limited region.

  • In the UK, the Ordnance Survey uses EPSG 27700 – British National Grid for Great Britain.
  • The Republic of Ireland and Northern Ireland both use EPSG 2157 - Irish Transverse Mercator
  • In Switzerland, the Federal Office of Topography uses EPSG 2056 (Swiss CH1903+ / LV95) and 5728 (LN02 Height)
  • Etc.
I can convert a shapefile to a GeoJSON, simultaneously changing the spatial reference identifier (SRID) to WGS84 (also known as EPSG 4326).
ogr2ogr -f GeoJSON swissBOUNDARIES3D_1_5_TLM_LANDESGEBIET.geojson swissBOUNDARIES3D_1_5_TLM_LANDESGEBIET.shp -t_srs EPSG:4326
Then I can proceed with the physical JSON file as previously described.

Converting GeoJSON to an Oracle Spatial Geometry

GeoJSON is just a JSON document, but it is structured in a particular way.  
  • It supports Point, LineString, Polygon, MultiPoint, MultiLineString, and MultiPolygon geometries. 
  • Geometric objects with additional properties are Feature objects. 
  • Sets of features are contained by FeatureCollection objects.
  • The specific fields in properties can vary.  I usually create a PL/SQL script specific to each GeoJSON to be loaded.
The sample below is taken from a GeoJSON of French departments.  The properties contain the department code, the name and the region number they are within.  There is no SRID; the data is already in WGS84.
{"type":"FeatureCollection"
,"features":
  [
    {"type":"Feature"
    ,"properties":
      {"code":"01","nom":"Ain","region":"84"}
    ,"geometry":
      {"type":"Polygon"
      ,"coordinates":[[[5.825,45.939],...,[5.825,45.939]]]
      }
    }
,   {"type":"Feature"
    ,"properties":
      {"code":"02","nom":"Aisne","region":"32"}
    ,"geometry":
      {"type":"Polygon"
      ,"coordinates":[[[3.987,49.379],...,[3.987,49.379]]]
      }
    }
…
  ]
}
This PL/SQL parses the GeoJSON, extracting the descriptive data.  The geometry data is converted with Oracle's SDO_UTIL.FROM_GEOJSON function.
DECLARE 
  l_clob        CLOB;
  j_root        JSON_OBJECT_T;
  j_crs         JSON_OBJECT_T;
  j_features    JSON_ARRAY_T;
  j_feature     JSON_OBJECT_T;
  j_properties  JSON_OBJECT_T;
  j_geometry    JSON_OBJECT_T;

  l_geom        MDSYS.SDO_GEOMETRY;

  l_id         INTEGER;
  l_srid       VARCHAR2(10 char);
  l_name       VARCHAR2(100 char);
  
  e_json_syntax_error  EXCEPTION;
  PRAGMA exception_init(e_json_syntax_error,-40441);
BEGIN 
  SELECT geo_json INTO l_clob FROM strava.stage_geo_data WHERE name = 'France Regions';
  l_clob := strava_http.clean_clob(l_clob);
  --strava_http.pretty_json(l_clob);
  j_root := JSON_OBJECT_T.parse(l_clob);

  j_crs := j_root.get_object('crs');
	  
  j_features := j_root.get_array('features');
  --l_srid     := REGEXP_SUBSTR(j_crs.get_object('properties').get_string('name'),'[^:]+',1,2);
  l_srid := 4326;
  FOR i IN 0 .. j_features.get_size - 1 LOOP
    j_feature := TREAT(j_features.get(i) AS JSON_OBJECT_T);
    j_properties := j_feature.get_object('properties');
    IF j_properties IS NULL THEN 
      dbms_output.put_line('J_PROPERTIES is null');
      l_id := i;
    ELSE
      l_clob := j_properties.to_clob;
      strava_http.print_clob(l_clob); --use this to understand new GeoJSON file properties, but comment it out later
      l_id               := j_properties.get_number('code');
      l_name             := j_properties.get_string('nom');
      dbms_output.put_line(l_id||', '||l_srid||', '||l_name);
    END IF;
	
    j_geometry := j_feature.get_object('geometry');
    IF j_geometry IS NULL THEN
      dbms_output.put_line(l_name||': j_geometry is null');
      l_geom := NULL;
    ELSE
      l_clob := j_geometry.to_clob /*Get coordinates array*/; 
      
      BEGIN
        l_geom := sdo_util.from_geojson(l_clob);
      EXCEPTION 
        WHEN e_json_syntax_error THEN
          dbms_output.put_line(sqlerrm||' during parse of '||l_name||' ('||l_id||'). Switch to own function.' );
          l_geom := strava_sdo.build_sdo_geometry_from_geojson(j_geometry, l_srid);
      END;

      l_geom.SDO_SRID := TO_NUMBER(l_srid);
      IF l_srid != 4326 THEN
        l_geom := SDO_CS.TRANSFORM(l_geom, 4326);
        l_geom.SDO_SRID := 4326;
      END IF;
      l_geom := sdo_util.rectify_Geometry(SDO_CS.MAKE_2D(l_geom),0.001);
    END IF;
    
    dbms_output.put_line(i||','||l_id||','||l_name);
    INSERT INTO stage_my_areas (area_code, area_number, name, geom)
    VALUES ('REG', l_id, l_name, l_geom);
  END LOOP;
  COMMIT;
END;
/
The resulting spatial geometry is written to a staging table.  From there, I can move it to its final destination.
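For reference, this is a minimal sketch of the staging table assumed by the INSERT statement in the script above.  The column names are taken from that statement; the datatypes and lengths are my assumptions.

```sql
-- Hypothetical DDL for the staging table used by the load script above.
-- Column names match the INSERT statement; datatypes are assumptions.
CREATE TABLE stage_my_areas
(area_code    VARCHAR2(3)   NOT NULL  -- e.g. 'REG' for French regions
,area_number  NUMBER        NOT NULL  -- code from the GeoJSON properties
,name         VARCHAR2(100)           -- nom from the GeoJSON properties
,geom         MDSYS.SDO_GEOMETRY      -- converted spatial geometry
);
```

To query the table spatially, you would also register the geometry column in USER_SDO_GEOM_METADATA and create a spatial index on it.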

Working Around Errors in SDO_UTIL.FROM_GEOJSON

The SDO_UTIL.FROM_GEOJSON function converts a GeoJSON object (or, more specifically, a geometry object in GeoJSON format) to a Spatial geometry object.  It was introduced in Oracle 12.2.  However, even in Oracle 26, I have occasionally encountered errors from this function with certain data sets. 
I am working with publicly available data sets.  Therefore, I will not easily get any problem resolved, even if I could identify the exact cause.  My workaround has been to use my own function (build_sdo_geometry_from_geojson).  It loads the array of coordinates from the JSON and passes it to Oracle's SDO_GEOMETRY constructor to create the geometry.  My function has always succeeded where FROM_GEOJSON fails, but it is much slower.  Therefore, I only call it in an exception handler when Oracle's function raises an error.
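The real function handles all the GeoJSON geometry types.  A minimal sketch of the same idea, for a simple Polygon with a single outer ring and no holes, might look like this.  The function name and the logic here are illustrative only, not the actual implementation:

```sql
-- Illustrative sketch: build an SDO_GEOMETRY from a GeoJSON Polygon
-- with a single outer ring.  The real function also handles the other
-- geometry types (MultiPolygon, LineString, etc.).
CREATE OR REPLACE FUNCTION simple_polygon_from_geojson
(p_geometry JSON_OBJECT_T
,p_srid     NUMBER
) RETURN MDSYS.SDO_GEOMETRY IS
  j_rings     JSON_ARRAY_T;
  j_ring      JSON_ARRAY_T;
  j_point     JSON_ARRAY_T;
  l_ordinates MDSYS.SDO_ORDINATE_ARRAY := MDSYS.SDO_ORDINATE_ARRAY();
BEGIN
  j_rings := p_geometry.get_array('coordinates');
  j_ring  := TREAT(j_rings.get(0) AS JSON_ARRAY_T);  -- outer ring only
  FOR i IN 0 .. j_ring.get_size - 1 LOOP
    j_point := TREAT(j_ring.get(i) AS JSON_ARRAY_T);
    l_ordinates.extend(2);
    l_ordinates(l_ordinates.count-1) := j_point.get_number(0); -- longitude
    l_ordinates(l_ordinates.count)   := j_point.get_number(1); -- latitude
  END LOOP;
  RETURN MDSYS.SDO_GEOMETRY
         (2003                                 -- 2D polygon
         ,p_srid
         ,NULL
         ,MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,1)  -- one exterior ring
         ,l_ordinates);
END simple_polygon_from_geojson;
/
```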

For example, I experienced ORA-40441 when loading data for Co. Carlow from the Irish government's Open Data Unit (I don't know why), but not for any of the other 25 counties.

REM ireland_counties_load.sql 
…
DECLARE 
…
  e_json_syntax_error  EXCEPTION;
  PRAGMA exception_init(e_json_syntax_error,-40441);
BEGIN 
…
  BEGIN
    l_clob := j_geometry.to_clob /*Get coordinates array*/; 
…
    l_geom := sdo_util.from_geojson(l_clob);
  EXCEPTION 
    WHEN e_json_syntax_error THEN
      dbms_output.put_line(sqlerrm||' during parse of '||l_name||' ('||l_id||'). Switch to own function.');
      l_geom := strava_sdo.build_sdo_geometry_from_geojson(j_geometry, l_srid);
  END;
…
END;
/
In the log below, you can see that the error was reported, but then the alternative function was successful.
{"OBJECTID":16,"CO_ID":"10000","ENGLISH":"CARLOW","GAEILGE":"Ceatharlach","LOGAINM_ID":"100004","GUID":"2ae19629-143d-13a3-e055-000000000001"
,"CONTAE":"Ceatharlach","COUNTY":"CARLOW","PROVINCE":"Leinster","CENTROID_X":680448.23,"CENTROID_Y":660624.58,"AREA":896306186.01}
10000, 2157, Carlow, Co. Carlow, Province:Leinster, Centre:-6.80998032929769,52.690783228944
ORA-40441: JSON syntax error during parse of Carlow. Switch to own function.
ID:10000, Province:Leinster, County:Carlow, Council:Carlow, Centre:-6.82336146940769,52.7293094021289, 14445 points

Tuesday, March 31, 2026

Making HTTP Requests from an Oracle Database

This blog is part of a series about aspects and features of Oracle 26 and Autonomous Transaction Processing Database.

I have created a training project that integrates Strava (an application that tracks athletes' activities – in my case, cycling) with an Oracle Autonomous database that then performs spatial data processing and sends the results back to Strava.  The Oracle database calls the Strava APIs in HTTP requests that either extract data from or send it back to Strava.  These calls are made with the UTL_HTTP package.

All web services must be secured.  When an Autonomous Database instance is on a public endpoint, only HTTPS is permitted, and only on port 443. 
UTL_HTTP needs a certificate wallet.  Autonomous Database instances are preconfigured with an Oracle wallet that contains more than 90 of the most commonly trusted root and intermediate SSL certificates.  

Access Control Lists (ACLs)

Before Oracle can call any external system, permission must be granted by creating an ACL.  The Strava APIs all sit below https://www.strava.com/api/v3/.  The following script creates a new ACL with connect and http privileges for the STRAVA user (the schema that runs my application) and assigns them to the host www.strava.com.  On Autonomous Database, this script should be run by the ADMIN user.

BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => 'www.strava.com',
    upper_port => 443,
    lower_port => 443,
    ace  => xs$ace_type(
              privilege_list => xs$name_list('connect','http'),
              principal_name => 'STRAVA',
              principal_type => xs_acl.ptype_db));
END;
/
If there is no ACL, I will get an error when I try to make the HTTP call.
ORA-29273: HTTP request failed
ORA-24247: network access denied by access control list (ACL)
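You can check which ACLs are already in place by querying the DBA_HOST_ACES view (as ADMIN).  A sketch:

```sql
-- List the hosts, ports and privileges granted to the STRAVA principal
SELECT host, lower_port, upper_port, principal, privilege
FROM   dba_host_aces
WHERE  principal = 'STRAVA'
ORDER BY host, privilege;
```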

HTTP Redirects

Sometimes an HTTP request is redirected to another URL, which may be in another domain.  An ACL is needed to cover each redirection of the URL.  Otherwise, the HTTP request will fail.  
I have particularly encountered this when downloading a GeoJSON resource directly into PL/SQL, where the published URL redirects to the actual location.  Errors ORA-29273 and ORA-24247 are raised even though an ACL grants access to the published URL.  Either additional ACLs, or more broadly defined ACLs (for example, using a wildcard host), are needed.
By default, UTL_HTTP will follow up to 3 redirections, but the maximum number of redirects can be set to any value, or redirection can be disabled altogether by setting the maximum to 0.
  IF p_redirect >= 0 THEN --restrict http redirect - mainly for debug
    UTL_HTTP.set_follow_redirect(l_req, p_redirect);
  END IF;

The easiest way I have found to determine all the redirections is to start by disabling redirection (setting the maximum number of redirects to 0), and then look in the body returned from the HTTP request for the redirected URL.  Add a new ACL for that address, increment the number of redirections in SET_FOLLOW_REDIRECT, and repeat the HTTP request.  Repeat this process until the full request returns the requested item.
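With redirection disabled, the target URL is also reported in the Location header of the 3xx response, so it can be read directly rather than picked out of the body.  This is a sketch of that approach; the URL is a placeholder, and the rest of the request setup (authorisation headers, etc.) is omitted:

```sql
-- Sketch: discover where a URL redirects to, without following the redirect.
DECLARE
  l_req   UTL_HTTP.req;
  l_resp  UTL_HTTP.resp;
  l_value VARCHAR2(4000);
BEGIN
  l_req := UTL_HTTP.begin_request('https://example.com/dataset', 'GET', 'HTTP/1.1');
  UTL_HTTP.set_follow_redirect(l_req, 0);         -- do not follow redirects
  l_resp := UTL_HTTP.get_response(l_req);
  IF l_resp.status_code BETWEEN 300 AND 399 THEN  -- a redirect was returned
    UTL_HTTP.get_header_by_name(l_resp, 'Location', l_value);
    DBMS_OUTPUT.put_line('Redirects to: '||l_value); -- this host needs an ACL
  END IF;
  UTL_HTTP.end_response(l_resp);
END;
/
```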

Take, for example, downloading county boundaries from the Irish Government's Open Data Unit at https://data.gov.ie/dataset/counties-national-statutory-boundaries-20191.  The published URL redirects to a host in the arcgis.com domain, so I created an additional ACL with a wildcard host:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => '*.arcgis.com',
    ace  => xs$ace_type(
              privilege_list => xs$name_list('connect','http'),
              principal_name => 'STRAVA',
              principal_type => xs_acl.ptype_db));
END;
/

HTTP Requests

To extract information from Strava, I make a GET request to one of the Strava API endpoints.  The response is a JSON message in the body.  That is read in chunks of less than 32K into a CLOB variable.
Updates to Strava are made through PUT requests to the Strava API.  The information to be updated is form-encoded and written to the request body.

REM strava_http.sql
…  
  l_req := UTL_HTTP.begin_request(p_url, p_req_type, 'HTTP/1.1');
  UTL_HTTP.set_header(l_req, 'Authorization', 'Bearer ' || g_access_token);
  utl_http.set_header(l_req, 'Accept-Charset', 'UTF-8');
  
  IF p_req_type = 'PUT' THEN
    UTL_HTTP.set_header(l_req, 'Content-Type', 'application/x-www-form-urlencoded');
    IF p_put_body IS NOT NULL THEN   -- Body
      l_header_body := escape_form_value(p_put_body);
      UTL_HTTP.set_header(l_req, 'Content-Length', LENGTH(l_header_body));
      UTL_HTTP.write_text(l_req, l_header_body);
    END IF;
  END IF;

  l_resp := UTL_HTTP.get_response(l_req);
…
  IF l_resp.status_code = 200 THEN
    NULL; --ok
  ELSIF l_resp.status_code = 401 THEN
    RAISE_APPLICATION_ERROR(-20401,'HTTP 401:Unauthorized');
…
  END IF;

  DBMS_LOB.createtemporary(l_clob, TRUE);
  LOOP
    DECLARE 
      l_buf VARCHAR2(32767);
    BEGIN
      UTL_HTTP.read_text(l_resp, l_buf, 32767);
      DBMS_LOB.writeappend(l_clob, LENGTH(l_buf), l_buf); --append each chunk to the CLOB
    EXCEPTION WHEN UTL_HTTP.end_of_body THEN EXIT; 
    END;
  END LOOP;
    
  UTL_HTTP.end_response(l_resp);
…

Reading Headers

It is possible to access the HTTP response header much as you would an array.
/*list all headers*/
  FOR i IN 1 .. UTL_HTTP.get_header_count(l_resp) LOOP
    UTL_HTTP.get_header(l_resp, i, l_header_name, l_header_value);
    DBMS_OUTPUT.put_line(i ||':'|| l_header_name || ':' || l_header_value);
  END LOOP;
Strava returns a lot of information in the header of the HTTP response.  
  • The status of the HTTP request can be obtained from the response structure, but it is also recorded in the header.
  • Strava imposes usage limits.  By default, I can make 100 read calls within 15 minutes, and 1000 in a day.  The current usage counts and limits are reported in the header every time the Strava API is called.  My application tracks them to stop some processes from making too many requests.  
  • Other items are added by AWS (Strava's host).  x-Amz-Cf-Pop:LHR86-P2 indicates that my request to Strava was served by a CloudFront edge server in London (Heathrow area), specifically node 86, partition P2.
GET https://www.strava.com/api/v3/gear/b993101
1:Content-Type:application/json; charset=utf-8
2:Transfer-Encoding:chunked
3:Connection:close
4:Date:Thu, 26 Mar 2026 20:58:11 GMT
5:x-envoy-upstream-service-time:5779
6:server:istio-envoy
7:status:200 OK
8:x-ratelimit-usage:1,901
9:x-ratelimit-limit:200,2000
10:cache-control:max-age=0, private, must-revalidate
11:vary:Origin
12:referrer-policy:strict-origin-when-cross-origin
13:x-permitted-cross-domain-policies:none
14:x-xss-protection:1; mode=block
15:x-request-id:4a357a89-2935-4f41-9fa8-df0f112b804d
16:x-readratelimit-limit:100,1000
17:x-download-options:noopen
18:etag:W/"3c69eb05224d9014b96fb818f43215d7"
19:x-frame-options:DENY
20:x-readratelimit-usage:1,901
21:x-content-type-options:nosniff
22:X-Cache:Miss from cloudfront
23:Via:1.1 933ed3357b8f85661e4d84ebef8a63a8.cloudfront.net (CloudFront)
24:X-Amz-Cf-Pop:LHR86-P2
25:X-Amz-Cf-Id:_dx6YiTmtNagUoazFjSSXzM4nvFG2NEp_vKm0O2kJWmmAYYX4Z_EHg==
Specific named header values can be read directly without looping through the entire header.  I use this to read the usage counts and limits.
e_http_request_failed EXCEPTION;
PRAGMA EXCEPTION_INIT(e_http_request_failed,-29273);
…
-- Read usage limit headers
    BEGIN
      UTL_HTTP.get_header_by_name(l_resp, 'x-readratelimit-limit', l_header_value);
      IF l_header_value IS NOT NULL THEN
        g_short_read_limit := REGEXP_SUBSTR(l_header_value, '[^,]+', 1, 1);
        g_long_read_limit  := REGEXP_SUBSTR(l_header_value, '[^,]+', 1, 2);
      END IF;
    EXCEPTION WHEN e_http_request_failed THEN NULL;
    END;
The application logs and reports these limits.
API Log:15-min read usage: 1/100, 15-min all usage: 1/200, daily read usage: 901/1000, daily all usage: 901/2000
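The tracking that stops processes from making too many requests can be as simple as a guard procedure called before each API call.  This is a sketch; g_short_read_usage is an assumed package global populated from the x-readratelimit-usage header of the previous response (alongside g_short_read_limit above), and the reserve threshold and error number are my choices:

```sql
-- Sketch: refuse to make another read call when close to the 15-minute limit.
-- g_short_read_usage/g_short_read_limit are assumed package globals populated
-- from the x-readratelimit-* headers of the previous response.
PROCEDURE check_read_limit IS
BEGIN
  IF g_short_read_limit IS NOT NULL
  AND g_short_read_usage >= g_short_read_limit - 5 THEN -- keep 5 calls in reserve
    RAISE_APPLICATION_ERROR(-20429
      ,'Strava 15-minute read limit nearly exhausted: '
       ||g_short_read_usage||'/'||g_short_read_limit);
  END IF;
END check_read_limit;
```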