How do I download files from a MinIO S3 bucket using curl?

I am trying to download the contents of a folder from a MinIO S3 bucket.

I am able to download a specific file using:

# aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model/requirements.txt .

But the command below throws an error when I try to download all the contents of the folder:

# aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model/* . 
fatal error: An error occurred (404) when calling the HeadObject operation: Key "3/050d4b07b6334997b214713201e41012/artifacts/model/*" does not exist

Any help will be appreciated.



Solution 1:[1]

I was finally able to get it working by running:

aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model . --recursive 

The one problem I ran into was that I had to install the AWS CLI via pip, as that was the only way I could get the --recursive option to work.
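
For context, S3 treats the trailing * as a literal part of the object key rather than a shell wildcard, which is why the HeadObject call returns a 404; --recursive is the AWS CLI's way of copying every object under a prefix. A minimal end-to-end sketch, assuming the MinIO credentials (placeholder values) are exported as environment variables before running the copy:

pip install awscli
export AWS_ACCESS_KEY_ID=<YOUR-ACCESS-KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR-SECRET-KEY>
aws --endpoint-url http://s3:9000 s3 cp s3://mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model . --recursive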

Solution 2:[2]

You could also use the MinIO Client (mc) and set an alias for your MinIO server. Here is an example taken from the official documentation showing how to do this with the Docker version of the MinIO Client:

docker pull minio/mc:edge
docker run -it --entrypoint=/bin/sh minio/mc
mc alias set <ALIAS> <YOUR-S3-ENDPOINT> [YOUR-ACCESS-KEY] [YOUR-SECRET-KEY] [--api API-SIGNATURE]

You can also install the client directly on any OS instead of using Docker.

Once that is done, copying content from S3 is just:

{minio_alias} cp {minio_s3_source} {destination}
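
For example, with the endpoint from the question and a hypothetical alias named myminio (access key and secret key are placeholders), downloading the whole model folder would look like this; the --recursive flag tells mc cp to copy every object under the prefix:

mc alias set myminio http://s3:9000 <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mc cp --recursive myminio/mlflow/3/050d4b07b6334997b214713201e41012/artifacts/model/ .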

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: A_K
[2] Solution 2: Moro