Connecting filebeat to elasticsearch using docker: Connection refused
I am new to the forum as well as to the ELK stack. I set up the ELK stack with Docker and it worked, but then I added Filebeat to the Compose file, and ever since, Filebeat has had trouble connecting to the other containers. Initially I configured it to send logs to Logstash, but after all the troubleshooting I still could not get Filebeat to talk to Logstash, so I removed Logstash completely and tried connecting Filebeat directly to Elasticsearch. After doing everything I could, all I see in the Filebeat container logs is: "Failed to connect to backoff(elasticsearch(http://elasticsearch:9200)): Get "http://elasticsearch:9200": dial tcp 192.168.128.2:9200: connect: connection refused". When I check the indices in Elasticsearch, I do see a filebeat index there. I am also able to ping elasticsearch from within the Filebeat container (same with Logstash earlier: I could ping logstash from within).
Even after removing logstash from the stack, I still see logstash indices in elasticsearch. No idea why!
Please guide me as to where I'm going wrong. Any help would be appreciated. Thanks in advance!
This is my filebeat.yml
```
filebeat.inputs:
- type: docker
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  cri.parse_flags: true
  combine_partial: true
  exclude_files: ['\.gz$']

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#output.logstash:
#  hosts: ["logstash:5044"]

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: elastic
  password: changeme

# Log files
logging.level: error
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

ssl.verification_mode: none

setup.kibana:
  host: "kibana:5601"
```
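As an aside on the input section above: on Filebeat 7.2+ the docker input type is deprecated in favour of the container input. A rough equivalent, assuming the same log mount path used in the compose file below, might look like this (sketch only, not the question's actual config):

```
filebeat.inputs:
# 'container' replaces the deprecated 'docker' input on Filebeat 7.2+.
- type: container
  # Same bind mount target as in the docker-compose.yml below.
  paths:
    - /usr/share/dockerlogs/data/*/*.log
  stream: stdout
  exclude_files: ['\.gz$']
```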
This is my elasticsearch.yml
```
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.port: 9200
#network.host: 142.93.218.7

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: basic
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
```
This is my docker-compose.yml:
```
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  #logstash:
  #  build:
  #    context: logstash/
  #    args:
  #      ELK_VERSION: $ELK_VERSION
  #  volumes:
  #    - type: bind
  #      source: ./logstash/config/logstash.yml
  #      target: /usr/share/logstash/config/logstash.yml
  #      read_only: true
  #    - type: bind
  #      source: ./logstash/pipeline
  #      target: /usr/share/logstash/pipeline
  #      read_only: true
  #  ports:
  #    - "5000:5000"
  #    - "9600:9600"
  #    - "5044:5044"
  #  environment:
  #    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  #  depends_on:
  #    - elasticsearch
  #  networks:
  #    - elk

  filebeat:
    build:
      context: filebeat-docker/
      # args:
      #   ELK_VERSION: $ELK_VERSION
    # Run as 'root' instead of 'filebeat' (uid 1000) to allow reading
    # 'docker.sock' and the host's filesystem.
    user: root
    #ports:
    #  - "5044:5044"
    command:
      # Log to stderr.
      - -e
      # Disable config file permissions checks. Allows mounting
      # 'config/filebeat.yml' even if it's not owned by root.
      # see: https://www.elastic.co/guide/en/beats/libbeat/current/config-file-permissions.html
      - --strict.perms=false
    # Mount the Docker socket and the host's container logs so Filebeat
    # can read them from within the container.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/containers:/usr/share/dockerlogs/data:ro
      #- /var/lib/docker:/var/lib/docker:ro
    networks:
      - elk
    depends_on:
      #- logstash
      - elasticsearch
      - kibana

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    external: true
```
These are the filebeat container logs:
```
2021-07-26T08:54:34.833Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://elasticsearch:9200)): Get "http://elasticsearch:9200": dial tcp 192.168.128.2:9200: connect: connection refused
2021-07-26T08:54:56.139Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://elasticsearch:9200)): Get "http://elasticsearch:9200": dial tcp 192.168.128.2:9200: connect: connection refused
```
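One thing worth noting about the compose file above: depends_on only waits for the Elasticsearch container to start, not for Elasticsearch to be ready to accept connections, so Filebeat's first attempts can be refused even though everything eventually comes up. A rough sketch of gating Filebeat on a healthcheck is shown below; it assumes a Compose version that supports condition: service_healthy (the Compose Specification / docker compose v2, or file format 2.x) and reuses the elastic/changeme credentials from the question.

```
# Sketch only: wait until Elasticsearch actually answers on 9200 before
# starting Filebeat. Other service settings stay as in the question.
services:
  elasticsearch:
    # ...build/volumes/ports/environment as above...
    healthcheck:
      test: ["CMD-SHELL", "curl -su elastic:changeme http://localhost:9200 >/dev/null || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12

  filebeat:
    # ...build/command/volumes as above...
    depends_on:
      elasticsearch:
        condition: service_healthy
```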
Solution 1:[1]
Did you try setting network.host in your elasticsearch.yml to 127.0.0.1 (in case everything runs on the same machine) or to your machine's IP?
Solution 2:[2]
In elasticsearch.yml you can set the bind address with network.host: localhost.
Set it to the IP address of your host or to the name of your Docker service.
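As a rough illustration of the network.host variants both answers refer to, using only values already present in the question's files:

```
# elasticsearch.yml — sketch of the bind-address options the answers mention.
# Bind to all interfaces (what the question already uses; this is what lets
# other containers on the 'elk' network reach the node):
network.host: 0.0.0.0
# Or bind to a specific address, e.g. the host machine's IP from the question:
#network.host: 142.93.218.7
http.port: 9200
```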
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | shell |
| Solution 2 | Blob |