One Cloud Storage Bucket with sub-folders or one Cloud Storage Bucket per microservice
I am currently using a one-bucket-per-microservice approach for my GCP Cloud Run Services: each service has its own Cloud Storage Bucket containing a default.tfstate file. I have three GCP project environments, suffixed development, staging, and production, and each Cloud Run Service has a variant in each of them. For example, my foo-service exists in development (while it's being developed), staging (after I merge it to main, deploy it, and test it), and production (once quality is sufficient to go live). This one service therefore needs three Cloud Storage Buckets, which I have to name carefully to avoid naming conflicts with other buckets.
Below you can see the structure I am using. Each bucket contains a default.tfstate file for Terraform.
GCP project name: my-project-prod
- Bucket: service-one-prod
- Bucket: service-two-prod
- Bucket: service-three-prod
GCP project name: my-project-stag
- Bucket: service-one-stag
- Bucket: service-two-stag
- Bucket: service-three-stag
GCP project name: my-project-dev
- Bucket: service-one-dev
- Bucket: service-two-dev
- Bucket: service-three-dev
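With this layout, each service's Terraform configuration points at its own bucket. A minimal sketch of the backend block, assuming the bucket names from the structure above (with no prefix, the GCS backend writes the default workspace's state to default.tfstate at the bucket root):

```hcl
terraform {
  backend "gcs" {
    # One dedicated bucket per service per environment,
    # e.g. service-one-prod, service-one-stag, service-one-dev.
    bucket = "service-one-prod"
  }
}
```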
Would there be an advantage to using one Cloud Storage Bucket per GCP project environment that contains sub-folders for each of my services, like this structure below? Each folder would contain the default.tfstate for Terraform.
GCP project name: my-project-prod
- Bucket: my-project-prod
- Folder: service-one
- Folder: service-two
- Folder: service-three
GCP project name: my-project-stag
- Bucket: my-project-stag
- Folder: service-one
- Folder: service-two
- Folder: service-three
GCP project name: my-project-dev
- Bucket: my-project-dev
- Folder: service-one
- Folder: service-two
- Folder: service-three
I am only using the bucket to store Terraform state for my Cloud Run Services. I am either going to create many buckets or many folders within one bucket.
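The sub-folder variant maps directly onto the GCS backend's prefix setting: all services share one bucket per environment, and each service gets its own prefix. A minimal sketch, assuming the bucket and folder names from the structure above:

```hcl
terraform {
  backend "gcs" {
    # One shared bucket per environment...
    bucket = "my-project-prod"
    # ...with a per-service prefix, so state is stored at
    # service-one/default.tfstate inside the bucket.
    prefix = "service-one"
  }
}
```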
Edit: adding more context to the original post below.
I am in the early stages of my product development, so I am looking for best practices before I start to scale. I have five Cloud Run Services at this time, with more to develop in the future. All the Cloud Run Services are in the us-central1 region, and each uses the default Compute Engine service account for security. Each service has its own dedicated Cloud Storage Bucket, with the location set to US (multiple regions) or us-central1. I am using the buckets only to store each Cloud Run Service's default.tfstate, nothing more. My services are simple APIs that read/write data to/from multiple sources, like other APIs, databases, and topics/queues.
I would say these services are simple. They are not pushing the limits of buckets or bucket access.
Solution 1:[1]
It's literally the same to use one bucket per service or just subfolders, unless:
a) You have multiple services in multiple regions, in which case you want to have a bucket as near as possible to reduce latency and cost
b) You are using uniform bucket-level access as your security method, in which case you want to limit each service's access to its own bucket
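For case b), the per-service-bucket layout lets you scope IAM at the bucket level. A minimal sketch in Terraform, assuming a hypothetical per-service service account (the default Compute Engine service account mentioned in the question would work the same way, but sharing it across services weakens this isolation):

```hcl
# Grant one service's account access to its own state bucket only.
# The service account email below is an illustrative placeholder.
resource "google_storage_bucket_iam_member" "state_access" {
  bucket = "service-one-prod"
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:service-one@my-project-prod.iam.gserviceaccount.com"
}
```

With the single-bucket-plus-folders layout, this kind of per-service isolation requires IAM Conditions on object prefixes instead, which is more involved.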
If you are just thinking about bucket vs. subfolder performance, there won't be a noticeable difference.
Maybe if you tell us a bit more about your project (security, regions, workload), I can help you further.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Chris32 |
