Error: ENOENT: no such file or directory, open '/root/.aws/credentials' when running in a Docker container


When I run the app locally it works normally, but when I try to run it in a Docker container I get the ENOENT error above. This is my Dockerfile:

FROM node:14.0.0
WORKDIR /app
ARG DATABASE_URL
ARG AWS_REGION
ARG CLIENT_ID
ARG USER_POOL_ID
ARG AWS_IOT_PUBLIC_TOPIC_NAME
ARG AWS_ACCESS_KEY_ID
ARG AWS_ACCESS_SECRET_KEY_ID
ARG AWS_ELASTIC_SERVICE_URL
ARG PORT
ARG IDENTIFY_POOL_ID
ARG MQTT_ENDPOINT

COPY package.json package-lock.json* ./
RUN npm install
COPY . /app

ENV DATABASE_URL=$DATABASE_URL
ENV REGION=$AWS_REGION
ENV CLIENT_ID=$CLIENT_ID
ENV USER_POOL_ID=$USER_POOL_ID
ENV AWS_IOT_PUBLIC_TOPIC_NAME=$AWS_IOT_PUBLIC_TOPIC_NAME
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_ACCESS_SECRET_KEY_ID=$AWS_ACCESS_SECRET_KEY_ID
ENV AWS_ELASTIC_SERVICE_URL=$AWS_ELASTIC_SERVICE_URL
ENV PORT=$PORT
ENV IDENTIFY_POOL_ID=$IDENTIFY_POOL_ID
ENV MQTT_ENDPOINT=$MQTT_ENDPOINT

RUN printenv

EXPOSE 5001
CMD npm run start
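For context, each ARG above only receives a value if one is supplied at build time. A build command along these lines would populate them; the image name (my-app) and the idea that the values come from the host environment are assumptions for illustration, not part of the original post:

# Hypothetical build command; 'my-app' and the sourced values are placeholders.
# Each --build-arg supplies one of the ARGs declared in the Dockerfile above.
docker build \
  --build-arg DATABASE_URL="$DATABASE_URL" \
  --build-arg AWS_REGION="$AWS_REGION" \
  --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  --build-arg AWS_ACCESS_SECRET_KEY_ID="$AWS_ACCESS_SECRET_KEY_ID" \
  -t my-app .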


Solution 1 [1]

You can solve this by bind-mounting your credentials file into the container:

docker run -v $HOME/.aws:/root/.aws ...
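Filled out, that might look like the sketch below; the image name (my-app) and the port mapping are assumptions based on the Dockerfile in the question, not part of the answer:

# Sketch only: 'my-app' is a hypothetical image name; 5001 matches the EXPOSE line above.
docker run -v "$HOME/.aws:/root/.aws" -p 5001:5001 my-app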

I don't like doing this, because it means that something inside the container can change files in your directory in ways you might not expect (and can leave them owned by root).

I prefer instead using environment variables to hold my credentials, and passing them to Docker using -e:

docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION -e AWS_REGION ...

Note that these variables must already be defined in your environment to be passed as -e VARNAME (and you shouldn't use -e VARNAME=VALUE because that makes the keys visible with ps).
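Put together, a complete invocation might look like the following sketch; the image name is again a placeholder, and the exported values are examples only:

# Assumes the credentials are already exported in the host shell, e.g.:
#   export AWS_ACCESS_KEY_ID=...        (real value set elsewhere)
#   export AWS_SECRET_ACCESS_KEY=...
#   export AWS_DEFAULT_REGION=us-east-1
# -e VARNAME (with no =VALUE) forwards the host value into the container,
# so the secrets never appear on the docker command line.
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION -e AWS_REGION \
  -p 5001:5001 my-app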

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: Parsifal