How can I use the internal IP address of a container as an environment variable in Docker?

I'm trying to get the IP address of my docker container as an environment variable within the container. Here is what I've tried:

When starting the container

docker run -dPi -e ip=`hostname -i` myDockerImage

When the container is already booted up

docker exec -it myDockerImage bash -c "export ip=`hostname -i`"

The problem with both methods is that the backticks are expanded by the shell on the host before Docker ever runs, so `hostname -i` returns the IP address of the host, not of the container the command is being run in.
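The quoting rule behind this is worth spelling out. Here's a hedged sketch (no Docker needed to see the effect): double quotes and bare backticks expand on the machine where you type the command, while single quotes pass the text through unexpanded, so it can expand later inside the container.

```shell
# What the question's commands do: the substitution runs right here, on the host.
outer=$(hostname -i 2>/dev/null || echo host-ip)
echo "expands here: ip=$outer"

# What docker run should receive: the literal text, still unexpanded.
echo 'expands later: export ip=$(hostname -i)'
```

Applied to the question, that would look like `docker run -d myDockerImage bash -c 'export ip=$(hostname -i); exec my_cmd'` (with `my_cmd` being the question's placeholder for the real command).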

So then I created a script inside the docker container that looks like this:

#!/bin/bash
export ip=`hostname -i`
echo $ip

And then run this with

docker exec -it myDockerImage bash -c ". ipVariableScript.sh"

When I append my command (bash, in my case) to the end of the script, the variable works in that one bash session, but not in the other files that need it later. I need it set as an environment variable for the whole container, not as a variable in a single session.

So I already sourced the script with `.`, but the variable still isn't set when I open a new shell in the container. An `echo $ip` inside the script prints the correct IP address, but the variable is only visible in the session that sourced it.
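The underlying reason is that environment variables only propagate from parent to child process: a variable exported inside one `docker exec` session dies with that session's shell. The usual way around this is an entrypoint wrapper that exports the variable once and then `exec`s the real command, so every process in the container inherits it. A hedged sketch (the file path is hypothetical):

```shell
# Write a wrapper that exports the IP, then replaces itself with the real command.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/bash
export ip=$(hostname -i)
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# Any process started through the wrapper now sees $ip in its environment:
/tmp/entrypoint.sh bash -c 'echo "child sees ip=$ip"'
```

In a Dockerfile you would bake this in with something like `ENTRYPOINT ["/entrypoint.sh"]`; the `exec` keeps the real command as PID 1 so it still receives signals.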



Solution 1:[1]

Service names in Docker are usually more reliable and easier to use than raw IP addresses. That said, here is how to assign the guest's IP to an environment variable inside the guest:

$ docker run -it ubuntu bash -c 'IP=$(hostname -i); echo ip=$IP'
ip=172.17.0.76

Solution 2:[2]

So, this is an old question, but I ended up with the same question yesterday, and my solution is: use `host.docker.internal`.

My containers were working fine, but at some point the IP changed and I had to update it in my docker-compose file. I can run `docker network inspect my-container_default` and read the internal IP from the output, but that still means editing my docker-compose every time the IP changes (and I'm not yet familiar enough with Docker to detect IP changes automatically or build a more sophisticated config). So instead I use the special hostname `host.docker.internal`. Now I never need to look up the container's IP, and everything stays connected.

Here is an example of a Node app that needs to connect to Elasticsearch.

version: '3.7'

services:
  api:
    ...configs...
    depends_on:
      - 'elasticsearch'
    volumes:
      - ./:/usr/local/api
    ports:
      - '3000:80'
    links:
      - elasticsearch:els
    environment:
      - PORT=80
      - ELASTIC_NODE=http://host.docker.internal:9200
  elasticsearch:
    container_name: 'els'
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    ...elastic search container configs...
    ports:
      - '9200:9200'
    expose:
      - 9200
    networks:
      - elastic

networks:
  elastic:
    driver: bridge

Note the `ELASTIC_NODE=http://host.docker.internal:9200` entry in the api service's environment, and the `elastic` network (bridge driver) that the Elasticsearch container is using.

This way you don't need to worry about knowing your IP.
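One caveat worth flagging (based on my understanding of Docker 20.10 and later; verify against your version): on Linux, `host.docker.internal` is not defined by default the way it is on Docker Desktop, but you can map it to the host gateway yourself in the compose file:

```yaml
services:
  api:
    extra_hosts:
      - 'host.docker.internal:host-gateway'
```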

Solution 3:[3]

The container name is postgres in this example. It is a bit clumsy, but it delivers.

container_ip=$(docker inspect postgres -f "{{json .NetworkSettings.Networks }}" \
  | awk -v FS=: '{print $9}' \
  | cut -f1 -d\,)
container_ip="${container_ip//\"}"

Make a function out of it:

#!/usr/bin/env bash

set -euo pipefail
#set -x
#trap read debug
    
#assign container ip address to variable
function get_container_ip () {
    container_ip=$(docker inspect "$1" -f "{{json .NetworkSettings.Networks }}" \
      | awk -v FS=: '{print $9}' \
      | cut -f1 -d\,)

    container_ip="${container_ip//\"}"
}

get_container_ip "$1"
echo "$container_ip"
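A hedged alternative that skips the awk/cut post-processing entirely: `docker inspect` accepts a Go template, so it can print the address directly. The container name is again just an example.

```shell
# Print the container's IP straight from docker inspect's Go template,
# iterating over whatever networks the container is attached to.
get_container_ip() {
    docker inspect -f \
      '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$1"
}

# Example call (requires a running container named "postgres"):
# ip=$(get_container_ip postgres)
```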

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: johntellsall
Solution 2: (no attribution given)
Solution 3: (no attribution given)