CSV not populating in S3 bucket
I have a function set up that converts my df into a CSV and pushes it to S3 (it does not save locally; I want it to save straight to the S3 bucket):
```python
import io

import boto3
import pandas as pd


def to_csv_s3(name, addy, zip_code):
    # Build the DataFrame: every row shares the same zip code
    zip_array = []
    for x in range(len(name)):
        zip_array.append(zip_code)
    df = pd.DataFrame()
    df['zip_code'] = zip_array
    df['name'] = name
    df['address'] = addy
    print(df)

    bucket = 'jcalkins-source'
    file_name = f'{zip_code}.csv'
    s3_client = boto3.client(
        "s3",
        aws_access_key_id=access_key_id,          # credentials defined elsewhere
        aws_secret_access_key=secret_access_key
    )

    # Serialize the DataFrame to an in-memory CSV and upload it
    with io.StringIO() as buffer:
        df.to_csv(buffer, index=False)
        response = s3_client.put_object(
            Bucket=bucket, Key='jcalkins_source/csv/{zip_code}.csv', Body=buffer.getvalue()
        )

    status = response.get("ResponseMetadata", {}).get("HTTPStatusCode")
    if status == 200:
        print(f"Successful S3 put_object response. Status - {status}")
    else:
        print(f"Unsuccessful S3 put_object response. Status - {status}")
```
After the function runs, it reports the response status. Whenever I run it I get status 200, but there's no file in my bucket.
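One quick way to check where (if anywhere) the object ended up is to list the bucket's keys. This is a minimal sketch, assuming the same `s3_client` and `bucket` as in the function above:

```python
# Print every key in the bucket to find where the CSV actually went
# (uses the s3_client and bucket defined in the question's function).
resp = s3_client.list_objects_v2(Bucket=bucket)
for obj in resp.get('Contents', []):
    print(obj['Key'])
```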
Solution 1:[1]
I figured it out. The upload was actually succeeding (hence the 200), but the Key sent the object somewhere other than where I was looking: 'jcalkins_source/csv/{zip_code}.csv' is not an f-string, so the {zip_code} placeholder was never filled in, and the jcalkins_source/csv/ path stored the object inside a folder rather than at the top of the bucket. Where you set Key equal to a file path, if you just want the file to go straight into the bucket and not into any folders, you only need to put

Key='file_name.csv'

If you want a specific folder, you can specify it before the file name, for example Key='some_folder/file_name.csv'.
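As a concrete sketch, here is a minimal corrected version of the upload step, assuming the same df, bucket, zip_code, and s3_client as in the question; the only change is that the key is built with an f-string so the zip code is actually interpolated:

```python
# Serialize and upload; the f-string key lands the file at the top level of the bucket.
with io.StringIO() as buffer:
    df.to_csv(buffer, index=False)
    response = s3_client.put_object(
        Bucket=bucket,
        Key=f'{zip_code}.csv',  # or f'jcalkins_source/csv/{zip_code}.csv' to keep the folder
        Body=buffer.getvalue()
    )
```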
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Author |
|---|---|
| Solution 1 | Jessica Calkins |
