Building a Custom Terraform Provider for Snowflake

Galaxy Glossary

How do I create a custom Terraform provider for Snowflake?

A custom Terraform provider for Snowflake is a Go-based plugin that extends Terraform’s resource graph to manage Snowflake objects (warehouses, databases, roles, etc.) through declarative code.



Overview

Terraform’s popularity stems from its pluggable architecture. If a target system does not have an official provider—or if your organization needs capabilities the official provider lacks—you can write your own. A custom Terraform provider for Snowflake lets platform teams provision databases, warehouses, roles, policies, and more using the same pipeline that manages cloud infrastructure.

Why You Might Need a Custom Provider

Snowflake-Labs maintains the official snowflake provider, but there are scenarios where teams prefer a custom fork or an entirely separate provider:

  • Enterprise Policies – You must embed opinionated naming rules, tagging, or security checks directly into provider logic.
  • Unsupported Objects – You need resource coverage for Snowflake objects that the community provider has not yet implemented.
  • Extension to Internal Tools – You want the provider to orchestrate proprietary tooling alongside Snowflake (e.g., ticket creation, auditing hooks).
  • Release Cadence Control – Regulated industries may need stricter change management than the public provider’s release schedule allows.

Prerequisites

Before diving in, make sure you have:

  • Go 1.21+
  • Terraform CLI 1.5+
  • GitHub account (or private Git host)
  • Snowflake account + ACCOUNTADMIN role for testing
  • Familiarity with Go modules and Terraform resource lifecycle

Provider Architecture Choices

Plugin SDK v2 vs. Terraform Plugin Framework

The Plugin SDK v2 is mature and battle-tested, but HashiCorp encourages new development to use the Terraform Plugin Framework. The framework offers stronger type safety, better testing primitives, and ongoing support for Terraform 1.x and beyond.

Package Layout

snowflake-provider/
├── main.go
├── go.mod
├── internal/
│   ├── client/
│   │   └── client.go
│   └── resources/
│       ├── database.go
│       └── warehouse.go
├── docs/
│   └── resources/
│       ├── database.md
│       └── warehouse.md
└── examples/
    └── basic_usage/
        └── main.tf

Keeping resources in internal/ prevents accidental import by downstream projects, allowing you to change APIs freely.

Step-by-Step Guide

1. Scaffold the Module

mkdir snowflake-provider && cd snowflake-provider
go mod init github.com/<org>/snowflake-provider

2. Import Terraform Plugin Framework

go get github.com/hashicorp/terraform-plugin-framework@latest

3. Implement the Provider Skeleton (main.go)

package main

import (
	"context"
	"log"

	"github.com/hashicorp/terraform-plugin-framework/datasource"
	"github.com/hashicorp/terraform-plugin-framework/provider"
	"github.com/hashicorp/terraform-plugin-framework/provider/schema"
	"github.com/hashicorp/terraform-plugin-framework/providerserver"
	"github.com/hashicorp/terraform-plugin-framework/resource"
	"github.com/hashicorp/terraform-plugin-framework/types"
)

type snowflakeProvider struct{}

func (p *snowflakeProvider) Metadata(_ context.Context, _ provider.MetadataRequest, resp *provider.MetadataResponse) {
	resp.TypeName = "snowflake_custom"
}

func (p *snowflakeProvider) Schema(_ context.Context, _ provider.SchemaRequest, resp *provider.SchemaResponse) {
	resp.Schema = schema.Schema{
		Attributes: map[string]schema.Attribute{
			"account":   schema.StringAttribute{Required: true},
			"username":  schema.StringAttribute{Required: true},
			"password":  schema.StringAttribute{Required: true, Sensitive: true},
			"role":      schema.StringAttribute{Optional: true},
			"warehouse": schema.StringAttribute{Optional: true},
		},
	}
}

func (p *snowflakeProvider) Configure(ctx context.Context, req provider.ConfigureRequest, resp *provider.ConfigureResponse) {
	// Populate config values.
	var config struct {
		Account   types.String `tfsdk:"account"`
		Username  types.String `tfsdk:"username"`
		Password  types.String `tfsdk:"password"`
		Role      types.String `tfsdk:"role"`
		Warehouse types.String `tfsdk:"warehouse"`
	}
	resp.Diagnostics.Append(req.Config.Get(ctx, &config)...)
	if resp.Diagnostics.HasError() {
		return
	}
	// Initialize the Snowflake driver client here and share it with
	// resources via resp.ResourceData.
}

// Resources registers the provider's resource types.
func (p *snowflakeProvider) Resources(_ context.Context) []func() resource.Resource {
	return []func() resource.Resource{
		// e.g. resources.NewDatabaseResource,
	}
}

// DataSources registers the provider's data sources (none yet).
func (p *snowflakeProvider) DataSources(_ context.Context) []func() datasource.DataSource {
	return nil
}

func New() provider.Provider { return &snowflakeProvider{} }

func main() {
	err := providerserver.Serve(context.Background(), New, providerserver.ServeOpts{
		Address: "registry.terraform.io/acme/snowflake-custom",
	})
	if err != nil {
		log.Fatal(err)
	}
}

4. Build a Resource (Database Example)

// internal/resources/database.go
package resources

import (
	"context"
	"database/sql"
	"fmt"
	"strings"

	"github.com/hashicorp/terraform-plugin-framework/resource"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
	"github.com/hashicorp/terraform-plugin-framework/types"
)

type databaseResource struct {
	db *sql.DB
}

func (r *databaseResource) Metadata(_ context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) {
	resp.TypeName = req.ProviderTypeName + "_database"
}

func (r *databaseResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) {
	resp.Schema = schema.Schema{
		Attributes: map[string]schema.Attribute{
			"name":    schema.StringAttribute{Required: true},
			"comment": schema.StringAttribute{Optional: true},
		},
	}
}

func (r *databaseResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
	// Retrieve the planned values.
	var plan struct {
		Name    types.String `tfsdk:"name"`
		Comment types.String `tfsdk:"comment"`
	}
	resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
	if resp.Diagnostics.HasError() {
		return
	}
	// Identifiers cannot be bound as query parameters, so quote the name
	// and escape the comment explicitly.
	query := fmt.Sprintf("CREATE DATABASE %q COMMENT = '%s'",
		plan.Name.ValueString(),
		strings.ReplaceAll(plan.Comment.ValueString(), "'", "''"))
	if _, err := r.db.ExecContext(ctx, query); err != nil {
		resp.Diagnostics.AddError("Snowflake Error", err.Error())
		return
	}
	resp.Diagnostics.Append(resp.State.Set(ctx, &plan)...)
}

// Implement Read, Update, Delete similarly...

Register this resource in the provider’s Resources() method so Terraform recognizes it.

5. Versioning & Publishing

HashiCorp’s public registry requires a GitHub repo tagged with semantic versions (v0.1.0, v1.0.0, etc.), a terraform-registry-manifest.json, and GPG-signed checksums covering each OS/arch build. Use goreleaser to automate builds and publishing:

brew install goreleaser
# .goreleaser.yaml config sets binary name, OS matrix, checksum, etc.
GITHUB_TOKEN=<PAT> goreleaser release --clean

6. Acceptance Tests

The companion terraform-plugin-testing module provides acceptance-test helpers for framework-based providers. Guard all resources with happy-path and destructive tests to maintain provider quality. Example skeleton (testAccPreCheck, testAccProviderFactories, and testAccDatabaseConfig are helpers you define elsewhere in the test package):

func TestAccDatabase_Basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:                 func() { testAccPreCheck(t) },
		ProtoV6ProviderFactories: testAccProviderFactories,
		Steps: []resource.TestStep{
			{
				Config: testAccDatabaseConfig("acc_db", "from test"),
				Check:  resource.TestCheckResourceAttr("snowflake_custom_database.db", "name", "ACC_DB"),
			},
		},
	})
}

Best Practices

  • Idiomatic Go – Follow Effective Go for error handling and naming.
  • Minimal State – Store only attributes Snowflake can return; avoid computed data you cannot read back.
  • Retry Patterns – Wrap calls with exponential back-off for transient network errors.
  • Pagination Handling – Use Snowflake’s SHOW commands carefully; large accounts require pagination logic.
  • CI/CD – Run go vet, go test, and tfplugindocs generation in pull requests.

Common Pitfalls and How to Avoid Them

Mismanaging Case Sensitivity

Snowflake folds unquoted identifiers to uppercase; your provider should normalize names or quote everything consistently.

Forgetting to Set Primary Identifiers

Terraform needs a stable id field. Compute it as a composite key (account|database|name) to avoid diffs.

Ignoring Dependency Ordering

Snowflake roles and grants depend on databases and warehouses. Expose ImportState and resource references so users can enforce ordering with depends_on.

Next Steps

After the first working resource, expand coverage incrementally: warehouses, roles, grants, network policies. Engage with HashiCorp’s #terraform-providers Slack for feedback and consider upstreaming your improvements to SnowflakeLabs if they are generic enough.

Key takeaway: A custom Terraform provider empowers DevOps and data teams to manage Snowflake objects in the same declarative workflow as cloud infrastructure, improving traceability and reducing manual administration.

Why Building a Custom Terraform Provider for Snowflake is important

Snowflake often sits at the heart of data platforms, yet its configuration drifts over time when managed manually or via scripts. Embedding Snowflake objects in Terraform lets engineering and data teams apply the same review, CI, and drift-detection processes they already trust for cloud infrastructure. A custom provider unlocks this power even when official providers lack required features or when enterprises need highly opinionated governance baked into provisioning logic.

Building a Custom Terraform Provider for Snowflake Example Usage


terraform {
  required_providers {
    snowflake_custom = {
      source  = "registry.terraform.io/acme/snowflake-custom"
      version = "~> 0.1"
    }
  }
}

provider "snowflake_custom" {
  account   = "xy12345.us-east-1"
  username  = "TF_ADMIN"
  password  = var.snowflake_password
  role      = "SYSADMIN"
  warehouse = "DEV_WH"
}

resource "snowflake_custom_database" "this" {
  name    = "APP_DB"
  comment = "Managed by Terraform"
}


Frequently Asked Questions (FAQs)

What SDK should I choose for a new provider?

HashiCorp recommends the Terraform Plugin Framework for net-new providers. It offers better type safety, testing, and future support than the legacy Plugin SDK v2.

How do I distribute binaries internally?

Use goreleaser to create platform-specific archives and store them in an internal registry (Artifactory, S3, or GitHub Releases). Configure TF_CLI_CONFIG_FILE so Terraform can find your private mirror.

Can I extend the official Snowflake provider instead?

Yes. Forking SnowflakeLabs/snowflake is often faster than starting from scratch. However, a greenfield provider can adopt the new framework, stricter lints, and organization-specific conventions without legacy constraints.

How do I run unit tests without a live Snowflake account?

Abstract the Snowflake driver behind an interface and inject a mock in unit tests. Reserve acceptance tests for real Snowflake interactions behind a guarded environment variable like SNOWFLAKE_ACC.
