How to Migrate from Redshift to Oracle

Galaxy Glossary

How do I migrate Amazon Redshift data to Oracle with minimal downtime?

Move data and schema from Amazon Redshift clusters into an Oracle database with minimal downtime.


Description

Why move from Redshift to Oracle?

Oracle offers advanced PL/SQL, partitioning, and on-prem controls that some enterprises require for regulatory or latency reasons. Migrating keeps analytics running where governance teams feel safest.

What must be ready before starting?

Provision an Oracle instance with matching or larger storage, create Oracle users/tablespaces, open network paths, and set up an S3 bucket plus IAM role that Redshift can UNLOAD to. Install Oracle SQL*Loader or set up external tables.
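As a minimal sketch of the Oracle-side setup, with hypothetical tablespace and user names (migration_ts, rs_migration) that you should adapt to your environment:

```sql
-- Hypothetical names; size the tablespace to match the Redshift data volume
CREATE TABLESPACE migration_ts
  DATAFILE 'migration_ts01.dbf' SIZE 50G AUTOEXTEND ON NEXT 5G;

CREATE USER rs_migration IDENTIFIED BY "ChangeMe_123"
  DEFAULT TABLESPACE migration_ts
  QUOTA UNLIMITED ON migration_ts;

GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE, CREATE VIEW TO rs_migration;
```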

Which migration strategy is fastest?

For bulk historical data use UNLOAD from Redshift to S3 in parallel CSV files, then SQL*Loader (or external tables) into Oracle. For change-data-capture use AWS DMS with "full load + ongoing replication" to keep inserts, updates, and deletes flowing until cut-over.

Can AWS DMS handle complex types?

DMS converts Redshift SUPER or GEOMETRY columns to CLOB or JSON in Oracle. Validate serialization early and adjust column types if needed.
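If you need to force a specific target type, a DMS table-mapping transformation rule can do it. The sketch below assumes a hypothetical public.orders.attributes SUPER column; verify the rule against the DMS table-mapping documentation for your engine versions:

```json
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "1",
      "rule-name": "super-to-clob",
      "rule-action": "change-data-type",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "public",
        "table-name": "orders",
        "column-name": "attributes"
      },
      "data-type": { "type": "clob" }
    }
  ]
}
```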

Is downtime required?

Downtime is limited to the final cut-over, once replication lag reaches zero and applications are switched over to Oracle. Keep this window short by running continuous replication beforehand.

Step-by-step migration checklist

1. Export schema

Export DDL from Redshift (pg_dump -s works only partially against Redshift, so many teams script DDL generation from the system tables), convert the PostgreSQL-style DDL to Oracle syntax (note that ora2pg converts in the opposite direction, Oracle to PostgreSQL, so expect this step to be largely manual or scripted), and run the converted DDL in Oracle.
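One common way to extract table DDL is the v_generate_tbl_ddl admin view from AWS's amazon-redshift-utils repository, assuming you have installed it in an admin schema:

```sql
-- Emit CREATE TABLE statements for every table in the public schema
SELECT ddl
FROM admin.v_generate_tbl_ddl
WHERE schemaname = 'public'
ORDER BY tablename, seq;
```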

2. Bulk unload data

Execute parallel UNLOAD commands for each table (example below). Use FORMAT AS CSV for SQL*Loader compatibility, or PARQUET only if your Oracle load path can read it.

3. Create external tables or control files

In Oracle, create either external tables that read the S3 files via Oracle Cloud Storage Gateway or download the files locally and build SQL*Loader control files.
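If you download the files, a SQL*Loader control file for the orders staging table might look like the sketch below; the column list and date mask are assumptions, so match them to your actual UNLOAD output:

```
LOAD DATA
INFILE 'orders_0000_part_00.csv'
APPEND
INTO TABLE orders_stage
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
  id,
  customer_id,
  order_date   DATE "YYYY-MM-DD",
  total_amount
)
```

Run it with sqlldr, e.g. sqlldr rs_migration control=orders.ctl, adding direct=true for direct-path loads.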

4. Load into staging tables

Insert data into staging tables with identical structure, then MERGE or INSERT /*+ APPEND */ into production tables for minimal redo generation.

5. Enable CDC

Start AWS DMS tasks with the same table mappings; let them replay ongoing changes while you validate row counts and spot-check aggregates.
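Validation can be as simple as running the same aggregate query on both systems and diffing the results; the orders table and its columns here follow the example later in this article:

```sql
-- Run on Redshift and on Oracle; the outputs should match exactly
SELECT COUNT(*)          AS row_count,
       MIN(order_date)   AS first_order,
       MAX(order_date)   AS last_order,
       SUM(total_amount) AS total
FROM orders;
```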

6. Cut over

Pause writes on Redshift, allow DMS to reach zero lag, switch application connection strings to Oracle, and decommission Redshift after final sign-off.

Best practices

Unload with gzip compression to cut transfer time, match Redshift DISTKEY/SORTKEY with Oracle partitioning where possible, and always migrate in schema-by-schema batches to simplify troubleshooting.
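For example, a Redshift SORTKEY on order_date could map to Oracle interval partitioning; the table definition and cut-off date below are illustrative:

```sql
CREATE TABLE orders (
  id           NUMBER,
  customer_id  NUMBER,
  order_date   DATE,
  total_amount NUMBER(12,2)
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_initial VALUES LESS THAN (DATE '2020-01-01'));
```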

Common mistakes and fixes

Wrong varchar sizes: Redshift VARCHAR(65535) becomes Oracle CLOB; redefine to realistic lengths.
Ignoring sort order: Queries can slow if Oracle lacks indexes mirroring Redshift SORTKEY; add B-tree or bitmap indexes.
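Both fixes happen in plain DDL; the notes column and index name below are hypothetical:

```sql
-- In the converted DDL, give oversized Redshift VARCHAR(65535) columns
-- a realistic VARCHAR2 length instead of letting them become CLOB
CREATE TABLE orders_notes (
  order_id NUMBER,
  notes    VARCHAR2(4000)
);

-- Mirror the Redshift SORTKEY with a B-tree index
CREATE INDEX orders_order_date_ix ON orders (order_date);
```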


How to Migrate from Redshift to Oracle Example Usage


-- Example: migrate Orders table
-- 1. Redshift unload
UNLOAD ('SELECT * FROM orders')
TO 's3://my-migration-bucket/orders_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV GZIP;

-- 2. Oracle external table
-- (gunzip the unloaded files first: ORACLE_LOADER cannot read gzip
--  directly unless you add a PREPROCESSOR clause)
CREATE TABLE orders_stage (
  id           NUMBER,
  customer_id  NUMBER,
  order_date   DATE,
  total_amount NUMBER(12,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY redshift_files
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('orders_0000_part_00')
);

-- 3. Merge into production table
MERGE INTO orders tgt
USING orders_stage src
ON (tgt.id = src.id)
WHEN NOT MATCHED THEN
  INSERT VALUES (src.id, src.customer_id, src.order_date, src.total_amount)
WHEN MATCHED THEN
  UPDATE SET tgt.total_amount = src.total_amount,
             tgt.order_date = src.order_date;

How to Migrate from Redshift to Oracle Syntax


-- Redshift: unload data
UNLOAD ('SELECT * FROM customers')
TO 's3://my-migration-bucket/customers_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV GZIP
ALLOWOVERWRITE
PARALLEL ON;

-- Oracle: create external stage table
-- (gunzip the unloaded files first: ORACLE_LOADER cannot read gzip
--  directly unless you add a PREPROCESSOR clause)
CREATE TABLE customers_stage (
  id           NUMBER,
  name         VARCHAR2(100),
  email        VARCHAR2(100),
  created_at   DATE
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY redshift_files
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('customers_0000_part_00')
);

-- Load into target table
INSERT /*+ APPEND */ INTO customers
SELECT * FROM customers_stage;


Frequently Asked Questions (FAQs)

Can I migrate Redshift views to Oracle?

Views must be recreated manually or with conversion scripts; verify that each view compiles under Oracle syntax and references only migrated objects.

How do I keep sequences in sync?

Create Oracle sequences starting at MAX(id)+1 from each Redshift table or switch to IDENTITY columns in Oracle 12c+.
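A sketch of both options, assuming an orders table whose maximum id you look up first (the START WITH value shown is illustrative):

```sql
-- Find the high-water mark on the migrated table
SELECT MAX(id) FROM orders;

-- Create the sequence just above it
CREATE SEQUENCE orders_seq START WITH 1000001;

-- Or, in Oracle 12c+, declare the column as an identity instead:
--   id NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1000001)
```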

What about Redshift spectrum external tables?

Spectrum tables reference S3 data, not Redshift storage. Export underlying files or repoint Oracle external tables to the same S3 location.
