Ways to copy a Bigtable table without affecting read latency
I am trying to copy a Bigtable table from one instance to another, but it seems there is no direct way to do it.
I am exploring Dataflow jobs that export to GCS and then import into Bigtable, but I am afraid the export process might affect the read latency of the source table. Is there any way to copy it without affecting the performance of the source table? The source table holds production data that gets high traffic.
Solution 1:[1]
You can create a new cluster in the instance and have the Dataflow job read from it using an application profile with single-cluster routing. These reads won't affect your production traffic going to any other cluster. Once the Dataflow job is done, you can delete the new cluster.
This falls roughly under the use case described here: https://cloud.google.com/bigtable/docs/replication-overview#batch-vs-serve
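The workflow above can be sketched with the gcloud CLI. This is a minimal outline, not a definitive recipe: the instance, cluster, zone, node count, and profile names below are placeholder assumptions, and the Dataflow invocation itself is elided since it depends on which export pipeline you use.

```shell
# Assumptions: instance "my-instance" already exists with a serving
# cluster; all names, the zone, and the node count are placeholders.

# 1. Add a temporary cluster to the instance; Bigtable replicates the
#    table data to it automatically.
gcloud bigtable clusters create batch-cluster \
    --instance=my-instance \
    --zone=us-central1-b \
    --num-nodes=3

# 2. Create an app profile with single-cluster routing to the new
#    cluster, so batch reads never hit the serving cluster.
gcloud bigtable app-profiles create batch-profile \
    --instance=my-instance \
    --route-to=batch-cluster \
    --description="Single-cluster routing for batch export"

# 3. Run your Dataflow export job, configured to read with the
#    "batch-profile" app profile.
#    ...Dataflow job invocation goes here...

# 4. Clean up once the copy is finished.
gcloud bigtable app-profiles delete batch-profile --instance=my-instance
gcloud bigtable clusters delete batch-cluster --instance=my-instance
```

Wait for replication to catch up before starting the export; deleting the app profile may prompt for confirmation since clients could still reference it.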
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Gary Elliott |
