Cross-region replication for Firestore exports is the practice of automatically copying your Cloud Firestore export files from a primary Cloud Storage bucket in one region to a bucket in another region to achieve disaster recovery, data residency, or latency goals.
Learn why and how to enable cross-region replication for your Cloud Firestore exports using dual-region buckets, Cloud Storage bucket replication, or manual copy pipelines, along with best practices, common pitfalls, and automation tips.
If your organization relies on Cloud Firestore as a critical transactional or analytical data store, you probably run gcloud firestore export jobs to back up your collections to Cloud Storage. Those export files are essential for point-in-time restores, analytics in BigQuery, and compliance audits. Keeping a single copy in the same region as the production database leaves you vulnerable to regional outages, accidental bucket deletion, and location-specific compliance failures. Cross-region replication mitigates these risks by maintaining an up-to-date replica of your export data in a geographically distinct location.
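A nightly export might look like the following sketch (the bucket name and collection IDs are illustrative, not from any real project):

# Export two collections asynchronously to the backup bucket
gcloud firestore export gs://my-firestore-backups-src \
  --collection-ids=users,orders \
  --async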
When you run an export, Firestore serializes documents into sharded RANGE_* and ALL_NAMESPACES_KIND_* files, then writes them to a Cloud Storage bucket of your choice. Firestore itself has no concept of replication for exports; once written, those objects are treated like any other Cloud Storage objects.
Cloud Storage offers two primary mechanisms to get your objects into multiple regions:

- Dual-region buckets: a single bucket whose contents live in two regions (a predefined pair such as asia1, or a configurable pair such as us-east1+us-central1) automatically keeps two copies with strong consistency.
- Bucket replication: asynchronous, object-level copying from a source bucket in one region to a destination bucket you control in another.

Both approaches satisfy “cross-region” requirements. The choice depends on latency, cost, control, and encryption needs.
This is the simplest path. Instead of exporting to gs://my-exports-single-region, create a dual-region bucket (for example, us-central1 + us-east1) and point your export at it. Google handles replication transparently, and you see only one bucket in your project.
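A minimal sketch, assuming a configurable dual-region pair and an illustrative bucket name:

# Create a configurable dual-region bucket spanning us-central1 and us-east1
gcloud storage buckets create gs://my-firestore-backups \
  --location=US \
  --placement=us-central1,us-east1

# Point exports at it; objects are stored in both regions automatically
gcloud firestore export gs://my-firestore-backups/exports/$(date +%F)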
Pros: No extra configuration, strongly consistent, lowest operational overhead.
Cons: Region choices are constrained (predefined pairs are fixed, and configurable pairs must stay within one continent); you can’t fine-tune policies like delete mirroring or encryption per destination.
Bucket replication gives you fine-grained control. You set a source bucket (in, say, us-central1) and a destination bucket (in europe-west4) and enable replication. New objects, including Firestore export shards, are copied automatically, and you can choose whether deletes are mirrored to the destination or preserved there.
High-level steps:

1. Create the destination bucket in the target region.
2. Grant the Cloud Storage service agent (service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com) storage.admin on that bucket.
3. Enable replication via gcloud storage buckets update, Terraform, or the JSON API.

After the initial configuration, any export placed in the source bucket is mirrored to the destination within seconds to minutes.
If regulatory or tooling constraints block official replication, you can build a Cloud Scheduler + Cloud Functions job that triggers after each export and uses gsutil rsync or Storage Transfer Service to copy the objects. While functional, this approach is harder to maintain, slower to pick up new shards, and introduces custom code.
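For reference, the copy step could be a one-liner, assuming the source and destination bucket names used elsewhere in this article:

# Mirror the export prefix into the DR bucket (run after each export completes)
gsutil -m rsync -r gs://my-firestore-backups-src/exports gs://my-firestore-backups-dst/exports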
The following example demonstrates bucket replication (Approach #2) using the gcloud CLI. Adjust region names to match your RTO/RPO requirements.
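This is a minimal sketch of steps 1 and 2 from the list above; the bucket names are illustrative, PROJECT_NUMBER must be replaced with your project’s number, and the exact flags for enabling replication (step 3) vary by gcloud release, so that step is shown only as a pointer:

# 1. Create the destination bucket in the DR region
gcloud storage buckets create gs://my-firestore-backups-dst \
  --location=europe-west4

# 2. Let the Cloud Storage service agent write replicated objects
gcloud storage buckets add-iam-policy-binding gs://my-firestore-backups-dst \
  --member=serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com \
  --role=roles/storage.admin

# 3. Enable replication on the source bucket via gcloud storage buckets update,
#    Terraform, or the JSON API; run `gcloud storage buckets update --help` to
#    see the replication flags available in your gcloud version.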
To verify, inspect a replicated object’s metadata: its x-goog-replication-status should read COMPLETE.

If you need scheduled exports, combine the replication with a Cloud Scheduler job:
# Cloud Scheduler, daily at 02:00 UTC
gcloud scheduler jobs create pubsub daily-firestore-export \
--schedule="0 2 * * *" --topic=firestore-export-trigger \
--message-body="{\"bucket\":\"my-firestore-backups-src\"}"
Your Cloud Function subscribes to the topic, runs gcloud firestore export or calls the Admin SDK, and writes to the source bucket. Replication remains transparent.
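The function’s core action boils down to one Admin API call, shown here with curl for clarity (PROJECT_ID and the bucket path are placeholders):

# Trigger an export via the Firestore Admin REST API
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"outputUriPrefix":"gs://my-firestore-backups-src/exports"}' \
  "https://firestore.googleapis.com/v1/projects/PROJECT_ID/databases/(default):exportDocuments"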
To keep the replica trustworthy, monitor and harden it:

- Monitor lag: the Cloud Monitoring metric storage.googleapis.com/storage/replication/object_replication_lag reveals lag in seconds.
- Alert on failures: create a log-based alert that fires when protoPayload.metadata.replicationStatus equals FAILED (see the logging sketch after these lists).
- Guard the destination: apply an org-policy that denies accidental deletion of destination buckets.
- Enable Bucket Lock if legal hold or immutability is required.

Watch out for these common pitfalls:

- Permission errors: replication stalls when the service agent lacks storage.admin on the destination bucket. Grant the role or create a custom role with storage.objects.create (see the custom-role sketch below).
- Wrong mechanism: dual-region buckets can’t pair arbitrary regions across continents; to replicate from us-central1 to europe-west4, use bucket replication instead.
- Surprise deletions: with deleteOption=DELETE_MARKER, the source’s lifecycle rules can propagate deletions. If you need longer retention in the DR region, set deleteOption=NONE.
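A quick CLI check for failures, assuming the log field cited above:

# List replication failures from the last day
gcloud logging read \
  'protoPayload.metadata.replicationStatus="FAILED"' \
  --limit=10 --freshness=1d

And if storage.admin is broader than your security team allows, a narrower custom role is one option (the role ID is illustrative):

# Custom role carrying only the object-write permission replication needs
gcloud iam roles create replicationWriter \
  --project=PROJECT_ID \
  --permissions=storage.objects.create \
  --stage=GA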
Cross-region replication for Firestore exports is straightforward once you understand Cloud Storage’s replication models. Whether you choose dual-region buckets for simplicity or bucket replication for flexibility, the key is to integrate the configuration into your infrastructure-as-code, monitor replication health, and test restores regularly. Doing so turns your daily Firestore exports from nice-to-have backups into a robust disaster-recovery asset.
Why is cross-region replication important for Firestore exports?
Without cross-region replication, a single regional outage or bucket misconfiguration can render your Firestore backups useless. Enabling replication ensures your export files survive disasters, satisfy data residency and compliance requirements, and remain quickly restorable no matter which Google Cloud region experiences issues.
Is a dual-region bucket enough, or do I need bucket replication?
A dual-region bucket replicates objects into two predefined regions and offers strong consistency. For most disaster-recovery scenarios, it is sufficient. Choose custom bucket replication only when you need specific region pairs, different encryption keys per copy, or more granular delete mirroring controls.
Does replication slow down exports or increase cost?
The export job writes only to the source bucket, so export speed is unaffected. You are charged for egress traffic from the source region to the destination region and for duplicated storage, but replication itself does not add API operation costs.
How do I monitor replication lag?
Use the Cloud Monitoring metric storage.googleapis.com/storage/replication/object_replication_lag and set an alert if the lag exceeds your RPO threshold, e.g., 15 minutes.
Can I restore Firestore directly from the replicated bucket?
Yes. Firestore’s gcloud firestore import command can reference any bucket your service account can access, including the replicated one. Always test the restore path as part of your DR drills.
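A restore from the DR copy might look like this sketch (the export path is illustrative):

# Import straight from the replicated bucket
gcloud firestore import gs://my-firestore-backups-dst/exports/2024-01-01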