
Upgrade to 6.2, Maintain 5.6+ Branch #66

Closed

sadsfae opened this issue Feb 22, 2018 · 8 comments

sadsfae (Owner) commented Feb 22, 2018

This should track the upgrade of the playbook to the latest 6.x series of Elasticsearch, Logstash and Kibana.

I am in the process of getting ELK 6.2.x+ to work but have hit a few bugs. I've managed to work around a few of them and will go back and file bug reports, but I'm stuck on one particular Logstash issue dealing with netty_tcnative and Logstash crypto:

logstash-plugins/logstash-input-beats#188
elastic/logstash-docker#77
logstash-plugins/logstash-input-beats#288

This relates to the beats plugin and its use of encryption ciphers: we auto-create and deploy x509 SSL certificates for the elk_client role so people can easily ship their logs to Logstash encrypted over the wire.

I have not yet gotten to upgrading Fluentd or to testing it or X-Pack on 6.2+ because of the above Logstash issues. Everything else I've run into just seems like packaging issues (missing directories from the RPM installation, etc.), and I'll file bugs for those upstream.

@sadsfae sadsfae self-assigned this Feb 22, 2018
sadsfae (Owner) commented Feb 22, 2018

Once I have gotten through the issues above (or the relevant fixes have been merged upstream) and 6.x can be deployed in a repeatable, idempotent fashion without breaking any of the features/functionality we currently have with 5.6.x (current master), I plan on the following changes:

  • Rename the current 5.6+ branch to 5.6.
  • Make 6.2.x the new master.

The final landscape should look like this:

  • 2.4 (legacy 2.4 branch, no new features)
  • 5.6 (5.6+ ELK)
  • master (latest 6.x ELK).

sadsfae (Owner) commented Apr 5, 2018

Update here: this is reported to be fixed in the logstash-input-tcp plugin upstream:

logstash-plugins/logstash-input-tcp#113

I have not gotten a chance to resume this yet due to travel and family issues.

jimb0bmij commented

How is this coming along? Have you updated to version 6.x of ELK?

sadsfae (Owner) commented Aug 13, 2018 via email

devantoine commented

Hi,

Any update on this? Do you need help?

sadsfae (Owner) commented Nov 15, 2018

> Hi,
>
> Any update on this? Do you need help?

Hey @devantoine, thanks for offering; I've just been really busy. I am already about 90% there but had to pause due to other activities in and out of work. I have started to pick this back up again, though! I don't think it'll be much longer.

sadsfae added a commit that referenced this issue Nov 15, 2018
We will be moving to the 6.x ELK series and master branch
will become 6.x series.  What is currently master will become
the 5.6 branch similar to what we've done for 2.4.

Fixes:

I've made some changes to yum with_items as this will be deprecated
in Ansible 2.11+.
Curator should be working again for 5.6, thanks @crossan007
for filing an issue about it.

Fixes: #71
Related-to: #66
sadsfae added a commit that referenced this issue Nov 15, 2018
Master branch will become a 6.x maintained ELK/EFK
branch in the near future, this will sync 5.6 for the
time-being for folks who might be installing it.

Related-to: #66
sadsfae (Owner) commented Nov 16, 2018

An update here: I'm hitting an issue with Filebeat pushing simple logs to logstash-6.5.0-1, and I've run out of time to debug for now. It looks similar to this issue, however.

This is caused by a breaking change introduced in Filebeat 6.3.0 in the host namespace that causes a mapping conflict.
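The shape change behind that mapping conflict can be sketched with plain dicts standing in for real events (field values here are made up for illustration):

```python
# Pre-6.3 Filebeat shipped "host" as a plain string, so the logstash-*
# index mapping was created with host as a text field.
old_event = {"message": "example log line", "host": "client01"}

# From 6.3.0 on (with add_host_metadata), "host" becomes an object,
# which cannot be indexed into the existing text mapping -- hence the
# "Can't get text on a START_OBJECT" mapper_parsing_exception below.
new_event = {"message": "example log line",
             "host": {"name": "client01", "architecture": "x86_64"}}

print(type(old_event["host"]).__name__)  # str
print(type(new_event["host"]).__name__)  # dict
```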

(filebeat client logs)

----
2018-11-16T01:55:46.754Z	INFO	cfgfile/reload.go:205	Loading of config files completed.
2018-11-16T01:55:46.760Z	INFO	log/harvester.go:254	Harvester started for file: /var/log/messages
2018-11-16T01:55:46.766Z	INFO	log/harvester.go:254	Harvester started for file: /var/log/audit/audit.log
2018-11-16T01:55:46.900Z	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":29}},"total":{"ticks":150,"time":{"ms":160},"value":150},"user":{"ticks":130,"time":{"ms":131}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"2d34a383-e086-4cb4-adfd-ca28778f08e8","uptime":{"ms":33019}},"memstats":{"gc_next":4194304,"memory_alloc":2792360,"memory_total":12400424,"rss":20815872}},"filebeat":{"events":{"active":6,"added":19,"done":13},"harvester":{"open_files":2,"running":2,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":6,"filtered":13,"published":6,"total":19}}},"registrar":{"states":{"current":14,"update":13},"writes":{"success":13,"total":13}},"system":{"cpu":{"cores":2},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.025,"5":0.005}}}}}}
2018-11-16T01:55:47.767Z	INFO	pipeline/output.go:95	Connecting to backoff(async(tcp://192.168.122.81:5044))
2018-11-16T01:55:47.862Z	INFO	pipeline/output.go:105	Connection to backoff(async(tcp://192.168.122.81:5044)) established

(logstash logs)

[2018-11-16T01:59:58,539][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.11.16", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x58b3b12f>], :response=>{"index"=>{"_index"=>"logstash-2018.11.16", "_type"=>"doc", "_id"=>"AtU_GmcBrJCZX05uC40T", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:331"}}}}}
[2018-11-16T01:59:58,539][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.11.16", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x56e4e69f>], :response=>{"index"=>{"_index"=>"logstash-2018.11.16", "_type"=>"doc", "_id"=>"A9U_GmcBrJCZX05uC40T", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:323"}}}}}

filebeat.yml config

filebeat.inputs:

- type: log
  enabled: true

  paths:
    - /var/log/messages
    - /var/log/*/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.dashboards.enabled: true
setup.dashboards.beat:

setup.kibana:
  host: "192.168.122.81:80"
  protocol: "http"
  username: "admin"
  password: "admin"
output.logstash:
  hosts: ["192.168.122.81:5044"]
  enabled: true
  ssl:
    enabled: true
    certificate_authorities: ["/etc/beat/beat-forwarder.crt"]
    certificate: "/etc/beat/beat-forwarder.crt"
    key: "/etc/beat/beat-forwarder.key"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

logstash.conf

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
}

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/usr/share/logstash/beat-forwarder.crt"
    ssl_key => "/usr/share/logstash/beat-forwarder.key"
    ssl_verify_mode => "none"
  }
}
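As an aside, the grok pattern in the syslog filter above can be approximated with a Python regex. This is a simplified stand-in for the real SYSLOGTIMESTAMP/SYSLOGHOST/DATA/POSINT/GREEDYDATA patterns, and the sample line is made up:

```python
import re

# Named groups mirror the grok capture names used in the filter above.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Nov 16 01:55:46 client01 sshd[2412]: Accepted publickey for root"
m = SYSLOG_RE.match(line)
print(m.group("syslog_hostname"))  # client01
print(m.group("syslog_program"))   # sshd
print(m.group("syslog_pid"))       # 2412
```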

Supposedly, adding this filter to logstash.conf should fix it:

filter {
  mutate {
    remove_field => [ "[host]" ]
  }
  mutate {
    add_field => {
      "host" => "%{[beat][hostname]}"
    }
  }
}
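A rough Python sketch of what those two mutate blocks do to an event, using a plain dict in place of a LogStash::Event (field names as in the fix above):

```python
def flatten_host(event):
    # remove_field => [ "[host]" ]: drop the object-valued host field
    event.pop("host", None)
    # add_field "host" => "%{[beat][hostname]}": re-add host as a string;
    # like Logstash sprintf, fall back to the literal if the field is absent
    event["host"] = event.get("beat", {}).get("hostname", "%{[beat][hostname]}")
    return event

event = {"message": "test", "host": {"name": "client01"},
         "beat": {"hostname": "client01"}}
print(flatten_host(event)["host"])  # client01
```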

devantoine commented

Well done, thank you sir!
