Error while importing torch inside a Docker image inside a VM
What I have:
I have set up an Ubuntu VM using Vagrant. Inside this VM, I want to build a Docker image that runs some services, which will be connected to clients outside the VM. This structure is fixed and cannot be changed. One of the Docker images uses ML frameworks, namely TensorFlow and PyTorch. The source code to be executed inside the Docker image is bundled using PyInstaller. Building and bundling work perfectly, but if I try to run the built Docker image, I get the following error message:
[1] WARNING: file already exists but should not: /tmp/_MEIl2gg3t/torch/_C.cpython-37m-x86_64-linux-gnu.so
[1] WARNING: file already exists but should not: /tmp/_MEIl2gg3t/torch/_dl.cpython-37m-x86_64-linux-gnu.so
['/tmp/_MEIl2gg3t/base_library.zip', '/tmp/_MEIl2gg3t/lib-dynload', '/tmp/_MEIl2gg3t']
[8] Failed to execute script '__main__' due to unhandled exception!
Traceback (most recent call last):
File "__main__.py", line 4, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 495, in exec_module
File "app.py", line 6, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 495, in exec_module
File "controller.py", line 3, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 495, in exec_module
File "torch/__init__.py", line 199, in <module>
ImportError: librt.so.1: cannot open shared object file: No such file or directory
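For context, librt.so.1 is part of glibc, and the error means the dynamic loader cannot resolve it at the moment torch's bundled _C.*.so is loaded from the /tmp/_MEI... extraction directory. The following small diagnostic (a hypothetical helper, not part of my project) can be dropped into the failing image to check, before importing torch, which libraries the loader can actually resolve:

```python
# Hypothetical diagnostic: check whether the dynamic loader can resolve
# the glibc companion libraries that torch's native extensions need.
import ctypes
import ctypes.util


def check_shared_lib(name):
    """Return the resolved soname for a library (e.g. 'librt.so.1'),
    or None if the loader cannot find or load it - the same failure
    mode as the ImportError above."""
    soname = ctypes.util.find_library(name)
    if soname is None:
        return None
    try:
        ctypes.CDLL(soname)  # actually load it, as torch/_C.so would
    except OSError:
        return None
    return soname


if __name__ == "__main__":
    # librt, libpthread and libdl are glibc components torch links against
    for lib in ("rt", "pthread", "dl"):
        print(lib, "->", check_shared_lib(lib))
```

If "rt" resolves to None inside the run image but to "librt.so.1" inside the build image, the library simply never made it into the final image.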
Dockerfile
ARG PRJ=unspecified
ARG PYINSTALLER_ARGS=
ARG LD_LIBRARY_PATH_EXTENSION=
ARG PYTHON_VERSION=3.7
###############################################################################
# Stage 1: BUILD PyInstaller
###############################################################################
# Alpine:
#FROM ... as build-pyinstaller
# Ubuntu:
FROM ubuntu:18.04 as build-pyinstaller
ARG PYTHON_VERSION
# Ubuntu:
RUN apt-get update && apt-get install -y \
python$PYTHON_VERSION \
python$PYTHON_VERSION-dev \
python3-pip \
unzip \
# Ubuntu+Alpine:
libc-dev \
g++ \
git
# Make our Python version the default
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python$PYTHON_VERSION 1 && python3 --version
# Alpine:
#
# # Install pycrypto so --key can be used with PyInstaller
# RUN pip install \
# pycrypto
# Install PyInstaller
RUN python3 -m pip install --proxy=${https_proxy} --no-cache-dir \
pyinstaller
###############################################################################
# Stage 2: BUILD our service with Python and pyinstaller
###############################################################################
FROM build-pyinstaller
# Upgrade pip and setuptools
RUN python3 -m pip install --no-cache-dir --upgrade \
pip \
setuptools
# Install pika and protobuf here as they will be required by all our services,
# and installing in every image would take more time.
# If they should no longer be required everywhere, we could instead create
# with-pika and with-protobuf images and copy the required, installed libraries
# to the final build image (similar to how it is done in cpp).
RUN python3 -m pip install --no-cache-dir \
pika \
protobuf
# Add "worker" user to avoid running as root (used in the "run" image below)
# Alpine:
#RUN adduser -D -g "" worker
# Ubuntu:
RUN adduser --disabled-login --gecos "" worker
RUN mkdir -p /opt/export/home/worker && chown -R worker /opt/export/home/worker
ENV HOME /home/worker
# Copy /etc/passwd and /etc/group to the export directory so that they will be installed in the final run image
# (this makes the "worker" user available there; adduser is not available in "FROM scratch").
RUN export-install \
/etc/passwd \
/etc/group
# Create tmp directory that may be required in the runner image
RUN mkdir /opt/export/install/tmp && chmod ogu+rw /opt/export/install/tmp
# When using this build-parent ("FROM ..."), the following ONBUILD commands are executed.
# Files from pre-defined places in the local project directory are copied to the image (see below for details).
# Use the PRJ and MAIN_MODULE arguments that have to be set in the individual builder image that uses this image in FROM ...
ONBUILD ARG PRJ
ONBUILD ENV PRJ=embedded.adas.emergencybreaking
ONBUILD WORKDIR /opt/prj/embedded.adas.emergencybreaking/
# "prj" must contain all files that are required for building the Python app.
# This typically contains a requirements.txt - in this step we only copy requirements.txt
# so that "pip install" is not run after every source file change.
ONBUILD COPY pr[j]/requirements.tx[t] /opt/prj/embedded.adas.emergencybreaking/
# Install required python dependencies for our service - the result is stored in a separate image layer
# which is used as cache in the next build even if the source files were changed (those are copied in one of the next steps).
ONBUILD RUN python3 -m pip install --no-cache-dir -r /opt/prj/embedded.adas.emergencybreaking/requirements.txt
# Install all linux packages that are listed in /opt/export/build/opt/prj/*/install-packages.txt
# and /opt/prj/*/install-packages.txt
ONBUILD COPY .placeholder pr[j]/install-packages.tx[t] /opt/prj/embedded.adas.emergencybreaking/
ONBUILD RUN install-build-packages
# "prj" must contain all files that are required for building the Python app.
# This typically contains a dependencies/lib directory - in this step we only copy that directory
# so that "pip install" is not run after every source file change.
ONBUILD COPY pr[j]/dependencie[s]/li[b] /opt/prj/embedded.adas.emergencybreaking/dependencies/lib
# .egg/.whl archives can contain binary .so files which can be linked to system libraries.
# We need to copy the system libraries that are linked from .so files in .whl/.egg packages.
# (Maybe Py)
ONBUILD RUN \
for lib_file in /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.whl /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.egg; do \
if [ -e "$lib_file" ]; then \
mkdir -p /tmp/lib; \
cd /tmp/lib; \
unzip $lib_file "*.so"; \
find /tmp/lib -iname "*.so" -exec ldd {} \; ; \
linked_libs=$( ( find /tmp/lib -iname "*.so" -exec get-linked-libs {} \; ) | egrep -v "^/tmp/lib/" ); \
export-install $linked_libs; \
cd -; \
rm -rf /tmp/lib; \
fi \
done
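The shell loop above can be sketched in Python for clarity (illustrative only; the Dockerfile does the same thing with unzip, ldd, and the project's get-linked-libs/export-install helpers): it lists the binary .so members bundled inside a .whl/.egg archive, whose system-library dependencies must be copied into the run image.

```python
# Illustrative sketch of the unzip step: list the native .so files
# bundled inside a wheel/egg archive (both are plain zip files).
import zipfile


def bundled_shared_objects(archive_path):
    """Return the names of all .so members inside a .whl/.egg archive."""
    with zipfile.ZipFile(archive_path) as zf:
        return [name for name in zf.namelist() if name.endswith(".so")]
```

Each .so found this way would then be fed through ldd to collect the system libraries it links against.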
# Install required python dependencies for our service - the result is stored in a separate image layer
# which can be used as cache in the next build even if the source files are changed (those are copied in one of the next steps).
ONBUILD RUN \
for lib_file in /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.whl; do \
[ -e "$lib_file" ] || continue; \
\
echo "python3 -m pip install --no-cache-dir $lib_file" && \
python3 -m pip install --no-cache-dir $lib_file; \
done
ONBUILD RUN \
for lib_file in /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.egg; do \
[ -e "$lib_file" ] || continue; \
\
# Note: This will probably not work any more as easy_install is no longer contained in setuptools!
echo "python3 -m easy_install $lib_file" && \
python3 -m easy_install $lib_file; \
done
# Copy the rest of the prj directory.
ONBUILD COPY pr[j] /opt/prj/embedded.adas.emergencybreaking/
# Show what files we are working on
ONBUILD RUN find /opt/prj/embedded.adas.emergencybreaking/ -type f
# Create an executable with PyInstaller so that python does not need to be installed in the "run" image.
# This produces a lot of error messages like this:
# Error relocating /usr/local/lib/python3.8/lib-dynload/_uuid.cpython-38-x86_64-linux-gnu.so: PyModule_Create2: symbol not found
# If the reported functions/symbols are called from our python service, the missing dependencies probably have to be installed.
ONBUILD ARG PYINSTALLER_ARGS
ONBUILD ENV PYINSTALLER_ARGS=${PYINSTALLER_ARGS}
ONBUILD ARG LD_LIBRARY_PATH_EXTENSION
ONBUILD ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${LD_LIBRARY_PATH_EXTENSION}
ONBUILD RUN mkdir -p /usr/lib64 # Workaround for FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib64' from pyinstaller
ONBUILD RUN \
apt-get update && \
apt-get install -y \
libgl1-mesa-glx \
libx11-xcb1 && \
apt-get clean all && \
rm -r /var/lib/apt/lists/* && \
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}" && \
echo "pyinstaller -p /opt/prj/embedded.adas.emergencybreaking/src -p /opt/prj/embedded.adas.emergencybreaking/dependencies/src -p /usr/local/lib/python3.7/dist-packages --hidden-import=torch --hidden-import=torchvision --onefile ${PYINSTALLER_ARGS} /opt/prj/embedded.adas.emergencybreaking/src/adas_emergencybreaking/__main__.py" && \
pyinstaller -p /opt/prj/embedded.adas.emergencybreaking/src -p /opt/prj/embedded.adas.emergencybreaking/dependencies/src -p /usr/local/lib/python3.7/dist-packages --hidden-import=torch --hidden-import=torchvision --onefile ${PYINSTALLER_ARGS} /opt/prj/embedded.adas.emergencybreaking/src/adas_emergencybreaking/__main__.py
# Maybe we will need to add additional paths with -p ...
# (The RUN must not end with "; \" before this comment, or the line
# continuation would swallow the following instruction.)
# Copy the runnable to our default location /opt/run/app
ONBUILD RUN mkdir -p /opt/run && \
cp -p -v /opt/prj/embedded.adas.emergencybreaking/dist/__main__ /opt/run/app
# Show linked libraries (as static linking does not work yet these have to be copied to the "run" image below)
#ONBUILD RUN get-linked-libs /usr/local/lib/libpython*.so.*
#ONBUILD RUN get-linked-libs /opt/run/app
# Add the executable and all linked libraries to the export/install directory
# so that they will be copied to the final "run" image
ONBUILD RUN export-install $( get-linked-libs /opt/run/app )
# Show what we have produced
ONBUILD RUN find /opt/export -type f
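One thing worth ruling out for the librt.so.1 error: ldd on the onefile bootloader (/opt/run/app) only reports the bootloader's own dependencies, not those of the .so files that torch unpacks into /tmp/_MEI... at runtime, so export-install may never see librt. A possible workaround to test, sketched as an extra ONBUILD step (the library paths are an assumption based on Ubuntu 18.04 x86_64; export-install is the project-specific helper used above):

```dockerfile
# Hypothetical extra step: explicitly export the glibc companion libraries
# that torch's unpacked .so files need but that ldd on the bootloader
# alone never reports (paths assume Ubuntu 18.04 on x86_64).
ONBUILD RUN export-install \
    /lib/x86_64-linux-gnu/librt.so.1 \
    /lib/x86_64-linux-gnu/libpthread.so.0 \
    /lib/x86_64-linux-gnu/libdl.so.2
```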
The requirements.txt used to install my dependencies looks like this:
numpy
tensorflow-cpu
matplotlib
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
--find-links https://download.pytorch.org/whl/torch_stable.html
torchvision==0.12.0+cpu
Is there anything obviously wrong here?
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow