How to have multiple state files for a workspace in Terraform GCS?
In AWS, when I set the backend provider to S3, I'm able to set multiple keys. I can have one key for the database, another key for the cluster, etc.
key - (Required) Path to the state file inside the S3 Bucket. When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).
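For context, a minimal sketch of the S3 backend block that section describes (the bucket name and key here are placeholders, not my real values):

terraform {
  backend "s3" {
    bucket               = "my-tf-state-bucket"        # placeholder bucket name
    key                  = "terraform-network.tfstate" # one key per configuration
    workspace_key_prefix = "workspaces"                # default prefix is "env:"
    region               = "us-west-1"
  }
}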
Then, in a module like my kube deployments, I can grab the remote state and use the cluster state separately from the DB state, etc.
The GCS backend does not seem to support key.
https://www.terraform.io/language/settings/backends/gcs
It seems to dump everything into a single state object named after the workspace, like myworkspace.tfstate.
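For comparison, the GCS backend only takes a bucket and a prefix; a minimal sketch (placeholder names):

terraform {
  backend "gcs" {
    bucket = "my-tf-state-bucket" # placeholder bucket name
    prefix = "terraform/state"    # state is written as <prefix>/<workspace>.tfstate
  }
}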
So when I go from one configuration to another, say from DB to Network, and apply, Terraform wants to destroy everything from the DB state when I apply in Network, and vice versa.
When I get to my deployments, I'm able to grab each remote state by its key name, so I'd have
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = local.tf-state-bucket-name
key = "workspaces/${terraform.workspace}/terraform-network.tfstate"
region = "us-west-1"
}
}
and
data "terraform_remote_state" "rds" {
backend = "s3"
config = {
bucket = local.tf-state-bucket-name
key = "workspaces/${terraform.workspace}/terraform-rds.tfstate"
region = "us-west-1"
}
}
I could then grab the DB username like so
data.terraform_remote_state.rds.outputs.db_username
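That works because the RDS configuration exposes it as a root-level output, something like this (the resource name here is hypothetical):

output "db_username" {
  value = aws_db_instance.this.username # hypothetical resource name in the RDS configuration
}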
I keep trying to structure my GCP setup (DB/VPC/Network/GKE) to look just like my AWS setup, but it just doesn't seem to be supported.
It seems that the key will always be the name of the workspace?
Instead of having a path like
workspaces/my-workspace/db.tfstate
in S3, would I do something like
db/my-workspace.tfstate
in GCS?
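That is, would each configuration get its own prefix, with the workspace name as the object name? A sketch of how reading that state back might look (assuming a prefix of "db"; the prefix is hypothetical, the bucket local is from the examples above):

data "terraform_remote_state" "rds" {
  backend   = "gcs"
  workspace = terraform.workspace
  config = {
    bucket = local.tf-state-bucket-name
    # hypothetical prefix; the state object would then be db/<workspace>.tfstate
    prefix = "db"
  }
}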