Move data and schema from Amazon Redshift clusters into an Oracle database with minimal downtime.
Oracle offers advanced PL/SQL, partitioning, and on-prem controls that some enterprises require for regulatory or latency reasons. Migrating keeps analytics running where governance teams feel safest.
Provision an Oracle instance with matching or larger storage, create Oracle users/tablespaces, open network paths, and set up an S3 bucket plus IAM role that Redshift can UNLOAD to. Install Oracle SQL*Loader or set up external tables.
For bulk historical data, use UNLOAD from Redshift to S3 in parallel CSV files, then load into Oracle with SQL*Loader (or external tables). For change data capture, use AWS DMS with "full load + ongoing replication" to keep inserts, updates, and deletes flowing until cut-over.
DMS converts Redshift SUPER or GEOMETRY columns to CLOB or JSON in Oracle. Validate serialization early and adjust column types if needed.
Downtime is limited to the final cut-over: once replication lag reaches zero, switch the application over to Oracle. Keep this window short by running continuous replication beforehand.
Run pg_dump -s against Redshift to export the schema-only DDL, convert the PostgreSQL-style DDL to Oracle syntax with a tool such as ora2pg, and run the generated DDL in Oracle.
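To make the conversion concrete, here is a sketch of what changes between the two dialects. The table sales_fact and its columns are illustrative, not from the source schema; the Oracle side reflects one common mapping (DISTKEY/SORTKEY have no direct Oracle analog, so the sort column becomes a partition key).

```sql
-- Redshift DDL, roughly as pg_dump -s would emit it (hypothetical table)
CREATE TABLE sales_fact (
    sale_id BIGINT IDENTITY(1,1),
    sold_at TIMESTAMP,
    note    VARCHAR(65535)
)
DISTKEY (sale_id)
SORTKEY (sold_at);

-- One possible hand-converted (or ora2pg-generated) Oracle equivalent:
-- interval partitioning on the former SORTKEY column
CREATE TABLE sales_fact (
    sale_id NUMBER(19) GENERATED BY DEFAULT AS IDENTITY,
    sold_at TIMESTAMP,
    note    CLOB
)
PARTITION BY RANGE (sold_at)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (TIMESTAMP '2020-01-01 00:00:00'));
```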
Execute parallel UNLOAD commands for each table (example below). Use FORMAT AS CSV or PARQUET for better performance.
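A minimal UNLOAD sketch, run in Redshift; the bucket name, prefix, and IAM role ARN are placeholders you must replace:

```sql
-- Parallel, compressed CSV export of one table to S3
UNLOAD ('SELECT * FROM sales_fact')
TO 's3://my-migration-bucket/sales_fact/part_'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-role'
FORMAT AS CSV
HEADER
GZIP
PARALLEL ON
MAXFILESIZE 256 MB;
```

Note that GZIP applies to text formats such as CSV; Parquet output uses its own internal compression and cannot be combined with GZIP.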
In Oracle, create either external tables that read the S3 files via Oracle Cloud Storage Gateway or download the files locally and build SQL*Loader control files.
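For the download-locally path, an external table is often simpler than hand-writing SQL*Loader control files. A sketch, assuming the files were copied to a server directory; the directory path, file names, and timestamp mask are assumptions to adapt:

```sql
-- Directory object pointing at the downloaded, decompressed CSV files
CREATE DIRECTORY mig_stage AS '/u01/stage/sales_fact';

-- External table reading the CSVs via the ORACLE_LOADER driver
CREATE TABLE sales_fact_ext (
    sale_id NUMBER(19),
    sold_at TIMESTAMP,
    note    VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY mig_stage
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1  -- skip the CSV header row written by UNLOAD ... HEADER
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        (sale_id,
         sold_at CHAR DATE_FORMAT TIMESTAMP MASK "YYYY-MM-DD HH24:MI:SS",
         note)
    )
    LOCATION ('part_000.csv', 'part_001.csv')
)
REJECT LIMIT UNLIMITED;
```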
Insert data into staging tables with identical structure, then MERGE or INSERT /*+ APPEND */ into production tables to minimize redo generation.
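The staging-to-production step might look like the following; table and column names are illustrative, and the redo savings from a direct-path insert are only fully realized on a NOLOGGING table:

```sql
-- Upsert: reconcile staging rows against existing production rows
MERGE INTO sales_fact tgt
USING sales_fact_stg src
   ON (tgt.sale_id = src.sale_id)
WHEN MATCHED THEN UPDATE SET
     tgt.sold_at = src.sold_at,
     tgt.note    = src.note
WHEN NOT MATCHED THEN INSERT (sale_id, sold_at, note)
     VALUES (src.sale_id, src.sold_at, src.note);

-- Or, when the target is empty, a direct-path load that bypasses
-- the buffer cache
INSERT /*+ APPEND */ INTO sales_fact
SELECT * FROM sales_fact_stg;
COMMIT;
```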
Start AWS DMS tasks with the same table mappings; let them replay ongoing changes while you validate row counts and spot-check aggregates.
Pause writes on Redshift, allow DMS to reach zero lag, switch application connection strings to Oracle, and decommission Redshift after final sign-off.
Unload with gzip compression to cut transfer time, match Redshift DISTKEY/SORTKEY with Oracle partitioning where possible, and always migrate in schema-by-schema batches to simplify troubleshooting.
Wrong varchar sizes: Redshift VARCHAR(65535) becomes Oracle CLOB; redefine columns to realistic lengths.
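One way to pick a realistic length is to measure the data before converting the DDL. A sketch with a hypothetical note column; do this before table creation, since Oracle does not allow altering an existing CLOB column down to VARCHAR2:

```sql
-- Run in Redshift: find the longest value actually stored
SELECT MAX(LEN(note)) AS max_note_len FROM sales_fact;

-- If, say, nothing exceeds 1000 characters, size the Oracle column
-- accordingly in the converted DDL instead of accepting CLOB
-- (VARCHAR2 tops out at 4000 bytes, or 32767 with MAX_STRING_SIZE = EXTENDED)
CREATE TABLE sales_fact (
    sale_id NUMBER(19),
    note    VARCHAR2(1000)
);
```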
Ignoring sort order: Queries can slow if Oracle lacks indexes mirroring Redshift SORTKEY; add B-tree or bitmap indexes.
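A sketch of the index fix; the column names (including the low-cardinality region column used for the bitmap example) are assumptions:

```sql
-- Mirror the former SORTKEY with a B-tree index on the columns
-- that hot queries filter or order by
CREATE INDEX sales_fact_sold_at_ix ON sales_fact (sold_at);

-- For low-cardinality columns in a read-mostly warehouse,
-- a bitmap index can be smaller and faster for equality predicates
CREATE BITMAP INDEX sales_fact_region_ix ON sales_fact (region);
```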
Views must be recreated manually or via scripts such as ora2pg; verify that each recreated Oracle view uses compatible syntax.
Create Oracle sequences starting at MAX(id)+1 from each Redshift table, or switch to IDENTITY columns in Oracle 12c+.
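Both options sketched below; the table name and the high-water mark value are hypothetical. Note that Oracle cannot convert an existing plain column into an identity column, so the identity variant belongs in the CREATE TABLE DDL:

```sql
-- Run in Redshift: find the current high-water mark
-- (suppose the result is 418000)
SELECT MAX(sale_id) FROM sales_fact;

-- Option 1: a sequence starting just above it
CREATE SEQUENCE sales_fact_seq START WITH 418001 CACHE 100;

-- Option 2 (12c+): an identity column that resumes from the same point
CREATE TABLE sales_fact (
    sale_id NUMBER(19) GENERATED BY DEFAULT AS IDENTITY (START WITH 418001),
    sold_at TIMESTAMP
);
```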
Spectrum tables reference S3 data, not Redshift storage. Export underlying files or repoint Oracle external tables to the same S3 location.