AWS Batch: how to change the storage capacity
I have some tar.gz archives on S3 Glacier Deep Archive. I would like to decompress them automatically as soon as they are restored, so I thought of using a Lambda function that launches a Batch job (some archives are too big to be decompressed directly in Lambda). What I would like to know is whether there is a simple way to adjust the temporary storage space available to my job. I saw that I can add EBS storage with a launch template, but the idea would be to have an amount of storage adapted to each archive; in other words, a Lambda function that retrieves the size of the archive and launches the Batch job with enough space. Note that the size of the archives varies widely, from a few GB to a few TB.
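The Lambda side of the idea (look up the restored object's size, then submit a Batch job sized accordingly) can be sketched as below. This is a minimal sketch, not a complete solution: the job queue and job definition names are placeholders, the event shape assumes an S3 restore-completed notification, and the 3x expansion factor is a guess you would tune to your data's real compression ratio. It also only *passes* the required size to the job; making Batch actually provision that much scratch space is the hard part addressed by the answer below.

```python
import json
import math


def required_storage_gib(archive_bytes, expansion_factor=3.0):
    # Estimate scratch space: compressed size times an assumed
    # expansion factor, rounded up to whole GiB, minimum 1 GiB.
    return max(1, math.ceil(archive_bytes * expansion_factor / 1024 ** 3))


def lambda_handler(event, context):
    # boto3 is imported lazily so the sizing helper above can be
    # tested without an AWS SDK or credentials available.
    import boto3

    s3 = boto3.client("s3")
    batch = boto3.client("batch")

    # Assumes an S3 event notification payload (restore completed).
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    storage_gib = required_storage_gib(size)

    # "decompress-queue" and "decompress-job" are hypothetical names;
    # the job definition is assumed to read BUCKET/KEY and untar onto
    # whatever scratch volume it has mounted.
    batch.submit_job(
        jobName="decompress-" + key.replace("/", "-"),
        jobQueue="decompress-queue",
        jobDefinition="decompress-job",
        containerOverrides={
            "environment": [
                {"name": "BUCKET", "value": bucket},
                {"name": "KEY", "value": key},
                {"name": "STORAGE_GIB", "value": str(storage_gib)},
            ]
        },
    )
    return {"statusCode": 200, "body": json.dumps({"storage_gib": storage_gib})}
```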
Solution 1:[1]
Probably EFS would be the easiest:
> Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with your AWS Batch jobs.
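Because EFS grows elastically, you don't have to size anything per job: the same mount works whether the archive is a few GB or a few TB. A sketch of a Batch job definition that mounts an EFS file system at `/scratch` via the `efsVolumeConfiguration` field is shown below; the file system ID, role ARN, image, and command are all illustrative placeholders (in particular, the container image must have the AWS CLI or equivalent available for the example command to work).

```python
def efs_job_definition(file_system_id, job_role_arn):
    # Build kwargs for batch.register_job_definition() that mount an
    # EFS file system at /scratch. All names/ARNs are placeholders.
    return {
        "jobDefinitionName": "decompress-job",
        "type": "container",
        "containerProperties": {
            # Illustrative image/command: stream the archive from S3
            # and untar it onto the elastic EFS mount.
            "image": "public.ecr.aws/amazonlinux/amazonlinux:2023",
            "command": [
                "sh", "-c",
                "aws s3 cp s3://$BUCKET/$KEY - | tar xzf - -C /scratch",
            ],
            "jobRoleArn": job_role_arn,
            "resourceRequirements": [
                {"type": "VCPU", "value": "2"},
                {"type": "MEMORY", "value": "4096"},
            ],
            "volumes": [
                {
                    "name": "scratch",
                    "efsVolumeConfiguration": {"fileSystemId": file_system_id},
                }
            ],
            "mountPoints": [
                {"sourceVolume": "scratch", "containerPath": "/scratch"}
            ],
        },
    }


if __name__ == "__main__":
    import boto3

    batch = boto3.client("batch")
    batch.register_job_definition(
        **efs_job_definition(
            "fs-0123456789abcdef0",  # placeholder file system ID
            "arn:aws:iam::123456789012:role/decompress-role",  # placeholder
        )
    )
```

One caveat worth knowing: EFS throughput can be lower than a local EBS/instance-store volume, so for multi-TB archives you may want to check that decompression speed is acceptable for your use case.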
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Marcin |
