
Beat/filebeat and Logstash - connection reset by peer

I have Elasticsearch, Logstash and Filebeat running on the same machine.

Filebeat is configured to send data to localhost:5043, and Logstash has a pipeline configured to listen on port 5043.

If I run netstat -tuplen I see:

[[email protected] bin]# netstat -tuplen | grep 5043 
tcp6  0  0 :::5043     :::*     LISTEN  994  147016  31435/java 

This means Logstash has loaded the pipeline and is listening on the expected port.

If I telnet to localhost on port 5043:

[[email protected] bin]# telnet localhost 5043 
Trying ::1... 
Connected to localhost. 
Escape character is '^]'. 
^CConnection closed by foreign host. 
[[email protected] bin]# 

This means the port is open.

However, when I read Filebeat's log, I see:

2017-02-15T17:35:32+01:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat] 
2017-02-15T17:35:32+01:00 INFO Setup Beat: filebeat; Version: 5.2.1 
2017-02-15T17:35:32+01:00 INFO Loading template enabled. Reading template file: /etc/filebeat/filebeat.template.json 
2017-02-15T17:35:32+01:00 INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /etc/filebeat/filebeat.template-es2x.json 
2017-02-15T17:35:32+01:00 INFO Elasticsearch url: http://localhost:5043 
2017-02-15T17:35:32+01:00 INFO Activated elasticsearch as output plugin. 
2017-02-15T17:35:32+01:00 INFO Publisher name: elk.corp.ncr 
2017-02-15T17:35:32+01:00 INFO Flush Interval set to: 1s 
2017-02-15T17:35:32+01:00 INFO Max Bulk Size set to: 50 
2017-02-15T17:35:32+01:00 INFO filebeat start running. 
2017-02-15T17:35:32+01:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file. 
2017-02-15T17:35:32+01:00 INFO Loading registrar data from /var/lib/filebeat/registry 
2017-02-15T17:35:32+01:00 INFO States Loaded from registrar: 0 
2017-02-15T17:35:32+01:00 INFO Loading Prospectors: 1 
2017-02-15T17:35:32+01:00 INFO Starting Registrar 
2017-02-15T17:35:32+01:00 INFO Start sending events to output 
2017-02-15T17:35:32+01:00 INFO Prospector with previous states loaded: 0 
2017-02-15T17:35:32+01:00 INFO Loading Prospectors completed. Number of prospectors: 1 
2017-02-15T17:35:32+01:00 INFO All prospectors are initialised and running with 0 states to persist 
2017-02-15T17:35:32+01:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s 
2017-02-15T17:35:32+01:00 INFO Starting prospector of type: log 
2017-02-15T17:35:32+01:00 INFO Harvester started for file: /tmp/logstash-tutorial.log 
2017-02-15T17:35:32+01:00 INFO Harvester started for file: /tmp/yum.log 
2017-02-15T17:35:38+01:00 ERR Connecting error publishing events (retrying): Get http://localhost:5043: read tcp [::1]:40240->[::1]:5043: read: connection reset by peer 

and the message 2017-02-15T17:35:41+01:00 ERR Connecting error publishing events (retrying): Get http://localhost:5043: read tcp 127.0.0.1:39214->127.0.0.1:5043: read: connection reset by peer repeats ad nauseam.

Am I missing the elephant in the room? Why is the connection being "reset by peer"?


pipeline.conf

input { 
    beats { 
     port => "5043" 
    } 
} 
# The filter part of this file is commented out to indicate that it is 
# optional. 
# filter { 
# 
# } 
output { 
    stdout { codec => rubydebug } 
} 

filebeat.yml

###################### Filebeat Configuration Example ######################### 

# This file is an example configuration file highlighting only the most common 
# options. The filebeat.full.yml file from the same directory contains all the 
# supported options with more comments. You can use it as a reference. 
# 
# You can find the full configuration reference here: 
# https://www.elastic.co/guide/en/beats/filebeat/index.html 

#=========================== Filebeat prospectors ============================= 

filebeat.prospectors: 

# Each - is a prospector. Most options can be set at the prospector level, so 
# you can use different prospectors for various configurations. 
# Below are the prospector specific configurations. 

- input_type: log 

    # Paths that should be crawled and fetched. Glob based paths. 
    paths: 
    - /tmp/*.log 
    #- c:\programdata\elasticsearch\logs\* 

    # Exclude lines. A list of regular expressions to match. It drops the lines that are 
    # matching any regular expression from the list. 
    #exclude_lines: ["^DBG"] 

    # Include lines. A list of regular expressions to match. It exports the lines that are 
    # matching any regular expression from the list. 
    #include_lines: ["^ERR", "^WARN"] 

    # Exclude files. A list of regular expressions to match. Filebeat drops the files that 
    # are matching any regular expression from the list. By default, no files are dropped. 
    #exclude_files: [".gz$"] 

    # Optional additional fields. These field can be freely picked 
    # to add additional information to the crawled log files for filtering 
    #fields: 
    # level: debug 
    # review: 1 

    ### Multiline options 

    # Mutiline can be used for log messages spanning multiple lines. This is common 
    # for Java Stack Traces or C-Line Continuation 

    # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [ 
    #multiline.pattern: ^\[ 

    # Defines if the pattern set under pattern should be negated or not. Default is false. 
    #multiline.negate: false 

    # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern 
    # that was (not) matched before or after or as long as a pattern is not matched based on negate. 
    # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash 
    #multiline.match: after 


#================================ General ===================================== 

# The name of the shipper that publishes the network data. It can be used to group 
# all the transactions sent by a single shipper in the web interface. 
#name: 

# The tags of the shipper are included in their own field with each 
# transaction published. 
#tags: ["service-X", "web-tier"] 

# Optional fields that you can specify to add additional information to the 
# output. 
#fields: 
# env: staging 

#================================ Outputs ===================================== 

# Configure what outputs to use when sending the data collected by the beat. 
# Multiple outputs may be used. 

#-------------------------- Elasticsearch output ------------------------------ 
output.elasticsearch: 
    # Array of hosts to connect to. 
    #hosts: ["localhost:9200"] 

    # Optional protocol and basic auth credentials. 
    #protocol: "https" 
    #username: "elastic" 
    #password: "changeme" 

#----------------------------- Logstash output -------------------------------- 
#output.logstash: 
    # The Logstash hosts 
    hosts: ["localhost:5043"] 

    # Optional SSL. By default is off. 
    # List of root certificates for HTTPS server verifications 
    #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] 

    # Certificate for SSL client authentication 
    #ssl.certificate: "/etc/pki/client/cert.pem" 

    # Client Certificate Key 
    #ssl.key: "/etc/pki/client/cert.key" 

#================================ Logging ===================================== 

# Sets log level. The default log level is info. 
# Available log levels are: critical, error, warning, info, debug 
#logging.level: debug 

# At debug level, you can selectively enable logging only for some components. 
# To enable all selectors use ["*"]. Examples of other selectors are "beat", 
# "publish", "service". 
#logging.selectors: ["*"] 

Are you using the beats input? –


Can you show the relevant parts of your Logstash (/etc/logstash/conf.d/filename.conf) and Filebeat (/etc/filebeat/filebeat.yml) configurations (redacted if necessary)? – Signus


Yes, I'm using the beats input. @Signus my Logstash pipeline is the one that can be found on the [Elastic website](https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html). I'll paste both as an edit to this question. – Navarro

Answers

6

I found that I had:

#-------------------------- Elasticsearch output ------------------------------ 
output.elasticsearch: 
... 
... 

#----------------------------- Logstash output -------------------------------- 
#output.logstash: 
... 
... 

when I should have had:

#-------------------------- Elasticsearch output ------------------------------ 
#output.elasticsearch: 
... 
... 

#----------------------------- Logstash output -------------------------------- 
output.logstash: 
... 
... 
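
In other words, the fix was to comment out the Elasticsearch output and enable the Logstash output. A minimal sketch of the corrected output section of filebeat.yml (assuming the port 5043 used in the question) would be:

#-------------------------- Elasticsearch output ------------------------------ 
#output.elasticsearch: 
    # Array of hosts to connect to. 
    #hosts: ["localhost:9200"] 

#----------------------------- Logstash output -------------------------------- 
output.logstash: 
    # The Logstash hosts - the port must match the beats input in pipeline.conf 
    hosts: ["localhost:5043"] 

With output.elasticsearch active, Filebeat was speaking the Elasticsearch HTTP protocol to the Beats port (hence the "Elasticsearch url: http://localhost:5043" and "Get http://localhost:5043" lines in the log), and Logstash's beats input reset the connection.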

Worked for me.... ;) –


This is the correct answer. – Kingname

2

Get http://localhost:5043

This suggests that what your Filebeat is configured to send and what Logstash is configured to listen for are out of sync. Logstash has a beats {} input specifically designed to act as a server for Beats connections; the default port is 5044. On the Beats side, you need to use the Logstash output to connect to that server. Doing so ensures that both sides speak the same language, and this error suggests that is not the case.
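
For example, with the default setup (a sketch using the standard port 5044 rather than the 5043 chosen in the question), the Filebeat side points its Logstash output at the same port the beats input listens on:

# filebeat.yml - must match the port of the beats {} input in the Logstash pipeline 
output.logstash: 
    hosts: ["localhost:5044"] 

Whichever port you pick, it has to be the same on both sides; an Elasticsearch-style output pointing at the Beats port will not work.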

+0

I set port 5043 on both ends, the Beats output and the Logstash input. – Navarro

0

In your Filebeat configuration, try changing tls to ssl. See the list of breaking changes.
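
As a sketch of what that rename looks like in a Filebeat 5.x filebeat.yml (reusing the example certificate path that is commented out in the question's config; adjust hosts and paths to your environment):

# Newer Filebeat releases use ssl.* options where older releases used a tls section 
output.logstash: 
    hosts: ["localhost:5043"] 
    ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] 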

0

In my case I was missing the Logstash template options:

output.logstash: 
    hosts: ["localhost:5044"] 
    template.enabled: true 
    template.path: "/etc/filebeat/filebeat.template.json" 
    index: "filebeat" 