How to deal with Python's requirements.txt while using a Docker development environment?

Suppose I wrote a docker-compose.dev.yml file to set up the development environment of a Flask project (a web application) using Docker. In docker-compose.dev.yml I have defined two services: one for the database and one that runs the Flask application in debug mode (which lets me make hot changes without recreating/restarting the containers). This allows everyone on the development team to use the same development environment very easily. However, there is a problem: while developing the application I inevitably need to install new libraries and list them in the requirements.txt file (since this is Python). With a Docker development environment I only see two ways to do this:

  1. Enter the console of the container where the Flask application is running and use the pip install ... and pip freeze > requirements.txt commands.
  2. Manually write the dependencies to the requirements.txt file and rebuild the containers.

The first option is a bit laborious, while the second is a bit "dirty". Is there any more suitable option than the two mentioned alternatives?

Edit: I don't know if I'm asking something that doesn't make sense, but I'd appreciate it if someone could give me some guidance on what I'm trying to accomplish.
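
For reference, a docker-compose.dev.yml of the kind described in the question might look roughly like the sketch below (the database image, service names, port, and bind mount are illustrative assumptions, not details from the actual project):

version: "3.8"

services:
  db:
    image: postgres:15                # assumed database engine
    environment:
      POSTGRES_PASSWORD: dev
    volumes:
      - db-data:/var/lib/postgresql/data

  web:
    build: .
    environment:
      FLASK_DEBUG: "1"                # debug mode, enables the auto-reloader
    command: flask run --host=0.0.0.0
    volumes:
      - .:/app                        # bind mount so hot changes reach the container
    ports:
      - "5000:5000"
    depends_on:
      - db

volumes:
  db-data: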



Solution 1:[1]

If the goal is a consistent dev environment, the safest approach I can think of is to build a base image with the updated dependencies and publish it to a private registry, so that you can refer to a specific tag such as app-base:v1.2. The application Dockerfile can then look like:

FROM app-base:v1.2
...

This means there is no need to install the dependencies when building the application image, which makes dev environment setup quicker and more consistent.
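
As a rough sketch of this approach (the registry host, image names, and Python version are illustrative assumptions, not details from the answer), the team maintains a base image whose only job is to carry the dependencies:

# Dockerfile.base -- rebuilt and pushed whenever requirements.txt changes
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

docker build -f Dockerfile.base -t registry.example.com/app-base:v1.2 .
docker push registry.example.com/app-base:v1.2

The application Dockerfile then just layers the sources on top of the pinned base:

# Dockerfile -- what each developer builds locally
FROM registry.example.com/app-base:v1.2
COPY . /app
CMD ["flask", "run", "--host=0.0.0.0"]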

Solution 2:[2]

Install the requirements into a virtualenv inside the container, placed in an externally mounted volume. Note that creating the virtualenv and installing into it must happen at container run time, NOT at image build time (at build time the mounted volume does not exist yet).

Assuming you are already mounting (not copying!) your project sources, you can keep the virtualenv in a ./.venv folder, which is a fairly standard convention.

Then you work just as you would locally: you run the install once when setting up the project for the first time, requirements are not reinstalled unless they change, the venv survives even if the container is rebuilt, and restarting the app does not reinstall the requirements every time.
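
A minimal sketch of this setup, assuming the sources are bind-mounted at /app and using a hypothetical entrypoint.dev.sh helper script (neither the path nor the script name comes from the original answer):

#!/bin/sh
# entrypoint.dev.sh -- create the virtualenv inside the mounted volume on first run only
set -e
if [ ! -d /app/.venv ]; then
    python -m venv /app/.venv
    /app/.venv/bin/pip install -r /app/requirements.txt
fi
exec /app/.venv/bin/flask run --host=0.0.0.0

Installing a new library then becomes something like docker compose exec web /app/.venv/bin/pip install <package>, followed by a pip freeze into requirements.txt; because ./.venv lives in the mounted volume, it survives container rebuilds.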

Just don't expect the virtualenv to be usable outside the container, e.g. by your IDE (though with a bit of hacking around the site module you could share the site-packages with a virtualenv on your machine).


This is a very different approach from how requirements are usually managed in production Docker images, where sources and requirements are copied and installed at image build time. So you'll probably need two quite different Dockerfiles for production deployment and for local development, just as you already have different docker-compose.yml files.

But if you want the two to be more similar, remember there is no harm in also using a virtualenv inside the production Docker image, despite the trend of not doing so.
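
For contrast, a production-style Dockerfile of the kind described above bakes sources and requirements in at build time (a sketch assuming gunicorn is listed in requirements.txt and the WSGI app is exposed as app:app):

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]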

Solution 3:[3]

The second option is what is generally used in Python environments: you add the new packages to requirements.txt and rebuild and restart the container, whose Dockerfile contains a pip install -r requirements.txt line that does the installing.
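
For example, if the dev Dockerfile contains lines such as

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

then after adding a package to requirements.txt the change is picked up by rebuilding, e.g. (the compose file and service name here are assumptions based on the question):

docker compose -f docker-compose.dev.yml up --build web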

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Anwar Husain
Solution 2:
Solution 3: Javad Zahiri