How to upload objects larger than 5 TB to Google Cloud Storage?

I am trying to save a PostgreSQL backup (~20 TB) to Google Cloud Storage for long-term storage, and I am currently piping the output of pg_dump into a streaming transfer through gsutil.

pg_dump -d $DB_NAME -b --format=t \
    | gsutil cp - gs://$BUCKET_NAME/$BACKUP_FILE

However, I am worried that the upload will fail because of GCS's 5 TB object size limit.

Is there any way to upload objects larger than 5 TB to Google Cloud Storage?

EDIT: using split?

I am considering piping pg_dump to Linux's split utility and then to gsutil cp.

pg_dump -d $DB -b --format=t \
    | split -b 50G - \
    | gsutil cp - gs://$BUCKET/$BACKUP

Would something like that work?



Solution 1:[1]

As mentioned by Ferregina Pelona, guillaume blaquiere, and John Hanley, there is no way to bypass the 5 TB limit implemented by Google, as stated in this document:

Cloud Storage 5TB object size limit

Cloud Storage supports a maximum single-object size up to 5 terabytes. If you have objects larger than 5TB, the object transfer fails for those objects for either Cloud Storage or Transfer for on-premises.

If the file surpasses the limit (5 TB), the transfer fails.
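Since the limit applies per object, the practical workaround is to store the dump as multiple objects that are each smaller than 5 TB, along the lines of the split idea in the question. Note that piping split's output straight into a single gsutil cp would not achieve this, because split writes its chunks to files rather than to stdout. Below is a minimal sketch of a chunked upload, assuming GNU coreutils split (for its --filter option) and reusing the variable names from the question; $BUCKET must be exported so the shell that split spawns for each chunk can see it.

# $BUCKET must be in the environment, because split runs the filter in a child shell
export BUCKET
pg_dump -d "$DB" -b --format=t \
    | split -b 50G --filter='gsutil cp - "gs://$BUCKET/$FILE"' \
        - "$BACKUP".part-

Each 50 GB chunk is streamed to its own object ($BACKUP.part-aa, $BACKUP.part-ab, and so on), and the dump can later be reassembled by downloading the parts in order and concatenating them, for example with gsutil cat.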

You can use Google's issue tracker to request this feature; within the link provided, you can check the features that have already been requested or file a new feature request that satisfies your needs.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Stack Overflow