Maximum file size from S3 a Lambda can open using the open() method
I'm currently developing some Lambdas to run Python scripts on text files hosted on S3.
Those text files can be quite large (up to 1 GB). As far as I know, Lambda has a 512 MB /tmp directory, so I assume I can only load a 512 MB file.
But I also read that it supports up to 10,240 MB of function memory.
So will I be able to open a 1 GB file from S3 using the open() method?
Could somebody also give a newbie some insight into the difference between that /tmp folder and memory? If the memory can be 10 GB, why would one want to use the 512 MB /tmp folder?
Thanks a lot!
Have a great 2022!
Solution 1:[1]
You can use a regular get_object call, without writing the file to /tmp. The response body is read straight into function memory, so as long as the function has enough memory allocated (more than 1 GB for a 1 GB file), the 512 MB /tmp limit does not come into play:
```python
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    response = s3.get_object(
        Bucket='your-bucket',
        Key='your-key'
    )
    # get the content of the file as bytes
    text_bytes = response['Body'].read()
    # change it to a string
    text_str = text_bytes.decode()
    # process text_str as you want
```
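If you would rather not hold the whole object in memory at once, you can also stream the body. The sketch below is a minimal variant of the answer above (the bucket name, key, and 1 MB chunk size are placeholder values) that iterates over the file line by line, so only a small buffer lives in memory at any point:

```python
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # 'your-bucket' and 'your-key' are placeholders, as in the answer above
    response = s3.get_object(Bucket='your-bucket', Key='your-key')

    line_count = 0
    # StreamingBody.iter_lines() downloads the object in chunks,
    # so the full 1 GB file is never materialized in memory at once
    for line in response['Body'].iter_lines(chunk_size=1024 * 1024):
        # process each decoded line here
        _ = line.decode()
        line_count += 1

    return {'lines': line_count}
```

As for /tmp versus memory: /tmp is disk space, useful when a library insists on a real file path (you would then use s3.download_file to write the object there and be bound by the /tmp size limit), whereas get_object keeps everything in function memory and only requires that enough memory is allocated.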
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Marcin |
