Connect Amazon Redshift as a target in dbt so you can compile, test, and run transformation models directly on your warehouse.
dbt automates SQL transformations, while Redshift supplies elastic storage and compute. Connecting them lets teams version-control transformation code, enforce data tests, and deploy production pipelines quickly.
Create a Redshift user with CREATE and USAGE privileges on the target schemas, whitelist your IP (or use an SSH tunnel), and install the adapter: `pip install dbt-redshift`.
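As a sketch, the admin-side setup might look like the following; the `analytics` user and the `public` schema are placeholders matching the profile shown below.

```sql
-- Run as a Redshift admin; "analytics" (user) and "public" (schema)
-- are placeholder names; substitute your own.
CREATE USER analytics PASSWORD 'REPLACE_WITH_A_STRONG_PASSWORD';

-- Let dbt create and reference objects in the target schema:
GRANT USAGE ON SCHEMA public TO analytics;
GRANT CREATE ON SCHEMA public TO analytics;

-- Let dbt read the raw tables it will transform:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO analytics;
```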
Add a profile to `~/.dbt/profiles.yml`. The connection keys (`type`, `host`, `port`, `user`, `password`, `dbname`, `schema`) are required; `threads`, `sslmode`, `ra3_node`, and `search_path` are optional.
```yaml
my_redshift_project:
  target: dev
  outputs:
    dev:
      type: redshift   # adapter name
      host: redshift-cluster.abc123.us-east-1.redshift.amazonaws.com
      port: 5439
      user: analytics
      password: "{{ env_var('REDSHIFT_PASSWORD') }}"
      dbname: analytics
      schema: public
      threads: 4
      sslmode: require
      ra3_node: false  # set true for RA3 clusters
      search_path: "$user,public"
```
Run `dbt debug --target dev`. A passing result confirms credentials, network access, and permissions are correct.
Compile and run with `dbt run --select customers orders order_items`. dbt creates tables or views in `analytics.public` unless overridden in `dbt_project.yml`.
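If you prefer to scope the override to a single model rather than the whole project, dbt also accepts a `config()` block in the model file. A minimal sketch (model and schema names are hypothetical); note that by default dbt appends a custom schema to the profile schema, so this builds in `public_marts`:

```sql
-- models/customers.sql (hypothetical)
-- With dbt's default schema resolution, this model builds in
-- analytics.public_marts (profile schema + "_" + custom schema),
-- not analytics.marts.
{{ config(materialized='table', schema='marts') }}

select * from {{ ref('stg_customers') }}
```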
Use environment variables for secrets, keep `threads` at or below your WLM queue's slot count, adopt incremental models for large fact tables, and grant read access on model schemas to downstream BI roles (one option is sketched below).
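For the last point, one option is dbt's `grants` config (dbt Core 1.2+), which re-applies the grant each time a model builds. It is relation-level rather than schema-level, but it keeps permissions versioned with the code. A sketch, assuming a `bi_reader` user or group already exists:

```sql
-- models/orders.sql (hypothetical; assumes a bi_reader user or group)
-- After each build, dbt issues: grant select on <this table> to bi_reader
{{ config(
    materialized='table',
    grants={'select': ['bi_reader']}
) }}

select * from {{ ref('stg_orders') }}
```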
**Wrong schema:** Leaving `schema: public` in `profiles.yml` when production tables live elsewhere causes run failures. Update the schema or override the `generate_schema_name` macro, as sketched below.
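A common override, adapted from the pattern in dbt's documentation, lives in `macros/generate_schema_name.sql` and uses a model's custom schema verbatim instead of dbt's default `<target_schema>_<custom_schema>` concatenation:

```sql
-- macros/generate_schema_name.sql
-- Use the custom schema exactly as written when a model sets one;
-- otherwise fall back to the schema from profiles.yml.
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if custom_schema_name is none -%}
        {{ target.schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```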
**Missing privileges:** Forgetting to grant CREATE on target schemas leads to permission errors. Grant the privilege to the dbt user (see the SQL above) or switch to a schema it owns.
**Do I need a separate profile for each environment?** No. Define multiple outputs under the same profile (e.g., `dev`, `prod`) and switch with `--target`.
**Can I authenticate with IAM instead of a static password?** Yes. Set `method: iam` and include `cluster_id` plus a valid AWS profile or role.
**How do I make a model incremental?** Add `materialized='incremental'` to your model config. On subsequent runs dbt processes only new or changed rows, reducing cost and runtime.
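A minimal sketch of such a model (model, column, and table names are hypothetical):

```sql
-- models/fct_orders.sql (hypothetical)
{{ config(
    materialized='incremental',
    unique_key='order_id'
) }}

select
    order_id,
    customer_id,
    order_total,
    updated_at
from {{ ref('stg_orders') }}

{% if is_incremental() %}
  -- On incremental runs, filter to rows newer than what is already
  -- loaded; "this" resolves to the existing target table.
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

With `unique_key` set, dbt-redshift replaces matching rows rather than appending them, so reruns don't create duplicates.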