separate version into dedicated folder

eason lau committed Jan 12, 2018
1 parent c60943f commit 3676be9
Showing 104 changed files with 3,927 additions and 0 deletions.
5 changes: 5 additions & 0 deletions .dockerignore
@@ -0,0 +1,5 @@
5.0.1/
5.3.1/
5.6.3/
6.0.0/

15 changes: 15 additions & 0 deletions 5.0.1/.env
@@ -0,0 +1,15 @@
# ~/elasticstack/.env

# environment properties, for distinguishing environments
environment=prod

# elasticsearch-image: data path on the host machine, default /usr/data/
E_LOCAL_DATA_PATH=/usr/data/

# logstash-image
## If the docker-compose file sets network_mode to host, Elasticsearch is reachable via localhost. Since all 3 components run on one machine, set this to localhost.
L_ELASTICSEARCH_HOST_ENV=localhost

# kibana-image
## If the docker-compose file sets network_mode to host, Elasticsearch is reachable via a localhost IP. Since all 3 components run on one machine, set this to 127.0.0.1 or 0.0.0.0.
K_ELASTICSEARCH_HOST_IP=127.0.0.1
127 changes: 127 additions & 0 deletions 5.0.1/README.md
@@ -0,0 +1,127 @@
# elasticstack
ELK : elasticsearch + logstash + kibana

* Version : 5.0.1 + 5.0.1 + 5.0.1
* Version : 5.1.1 + 5.1.1 + 5.1.1
* Version : 5.x + 5.x + 5.x

Forwarder : filebeat port 5044

### Prerequisite
* OS : CentOS 7.x
* Docker engine > 1.12.x
* Docker-compose > 1.11.x

### Clone the Git repository under your user home

cd ~
git clone https://github.com/easonlau02/elasticstack.git

## All 3 components (ELK) on one machine

#### Change only the `~/elasticstack/.env` file below
# ~/elasticstack/.env

# environment properties, for distinguishing environments
environment=prod

# elasticsearch-image: data path on the host machine, default /usr/data/
E_LOCAL_DATA_PATH=/usr/data/

# logstash-image
## If the docker-compose file sets network_mode to host, Elasticsearch is reachable via localhost. Since all 3 components run on one machine, set this to localhost.
L_ELASTICSEARCH_HOST_ENV=localhost

# kibana-image
## If the docker-compose file sets network_mode to host, Elasticsearch is reachable via a localhost IP. Since all 3 components run on one machine, set this to 127.0.0.1 or 0.0.0.0.
K_ELASTICSEARCH_HOST_IP=127.0.0.1
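Everything in this .env file reaches the containers through docker-compose's variable substitution. A minimal standalone sketch of what that amounts to (docker-compose reads the .env and expands `${...}` for you; here it is done by hand purely to illustrate):

```shell
# Recreate a .env like the one above (content mirrors the README).
cat > .env <<'EOF'
environment=prod
E_LOCAL_DATA_PATH=/usr/data/
EOF

# Export every KEY=VALUE pair, then expand ${...} references the way
# docker-compose does when rendering docker-compose.yml.
set -a; . ./.env; set +a
echo "env=${environment}"
echo "data=${E_LOCAL_DATA_PATH}"
```

Running it prints `env=prod` and `data=/usr/data/`, the same values that land in the containers' environment.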
#### The docker-compose file `~/elasticstack/docker-compose.yml` generally needs no changes. The fields you configured in the .env file above are passed into the containers through this docker-compose file.
version: '2'
services:
elasticsearch:
...
environment:
- env=${environment}
volumes:
...
- ${E_LOCAL_DATA_PATH}:/usr/share/elasticsearch/data
...
...
logstash:
image: eason02/logstash:5.0.1
...
environment:
- env=${environment}
...
...
kibana:
...
environment:
- env=${environment}
extra_hosts:
- "elasticsearchHost:${K_ELASTICSEARCH_HOST_IP}"
...
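The `extra_hosts` entry is what lets Kibana find Elasticsearch by the name `elasticsearchHost`: docker-compose appends a line mapping that name to the IP from `.env` into the container's `/etc/hosts`. An illustrative sketch of the resulting mapping (not code from the repo):

```shell
# docker-compose turns extra_hosts "elasticsearchHost:${K_ELASTICSEARCH_HOST_IP}"
# into an /etc/hosts entry of the form "<ip> <name>" inside the container.
K_ELASTICSEARCH_HOST_IP=127.0.0.1
echo "${K_ELASTICSEARCH_HOST_IP} elasticsearchHost"   # -> 127.0.0.1 elasticsearchHost
```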
#### Start the ELK services on one machine
cd ~/elasticstack
docker-compose up -d

## Not all 3 on one machine

#### There are three .env files you might change, each simpler than before
`~/elasticstack/elasticsearch/.env`

# ~/elasticstack/elasticsearch/.env

# environment properties, for distinguishing environments
environment=prod

# elasticsearch-image: data path on the host machine, default /usr/data/
E_LOCAL_DATA_PATH=/usr/data/

`~/elasticstack/logstash/.env`

# ~/elasticstack/logstash/.env

# environment properties, for distinguishing environments
environment=prod

# logstash-image
## Specify the Elasticsearch host: if on the same machine, set it to localhost; otherwise set the specific host name. Make sure no firewall blocks access to elasticsearch:9200.
L_ELASTICSEARCH_HOST_ENV=localhost

`~/elasticstack/kibana/.env`

# ~/elasticstack/kibana/.env

# environment properties, for distinguishing environments
environment=prod

# for kibana-image
## Specify the Elasticsearch host IP: if on the same machine, set it to 127.0.0.1 or 0.0.0.0; otherwise set the specific host IP. Make sure no firewall blocks access to elasticsearchIP:9200.
K_ELASTICSEARCH_HOST_IP=127.0.0.1

#### Each component has its own docker-compose file, so each can be started on its own machine.
* [`~/elasticstack/elasticsearch/docker-compose.yml`](https://github.com/easonlau02/elasticstack/blob/master/elasticsearch/docker-compose.yml)
* [`~/elasticstack/logstash/docker-compose.yml`](https://github.com/easonlau02/elasticstack/blob/master/logstash/docker-compose.yml)
* [`~/elasticstack/kibana/docker-compose.yml`](https://github.com/easonlau02/elasticstack/blob/master/kibana/docker-compose.yml)
#### Start each ELK service on its corresponding machine.
**Elasticsearch at host1**:

cd ~/elasticstack/elasticsearch
docker-compose up -d

**Logstash at host2**:

cd ~/elasticstack/logstash
docker-compose up -d

**Kibana at host3**:

cd ~/elasticstack/kibana
docker-compose up -d

## Access Kibana via `<kibanahost>:5601`; you should see the screenshot below
![alt text](https://raw.githubusercontent.com/easonlau02/elasticstack/master/kibana_up.png "kibana_up")

You may see **Unable to fetch mapping. Do you have indices match...**, which is caused by no logs being fed in yet.
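The missing log feed normally comes from Filebeat, the forwarder mentioned at the top, shipping to Logstash on port 5044. A minimal `filebeat.yml` sketch for Filebeat 5.x; the log path and host here are assumptions to adapt, not files from this repo:

```yaml
filebeat.prospectors:              # Filebeat 5.x input style
  - input_type: log
    paths:
      - /var/log/app/*.log         # assumed application log path
output.logstash:
  hosts: ["<logstash-host>:5044"]  # the Logstash port exposed above
```

Once logs flow in, a Kibana index pattern (e.g. the default `logstash-*`) can be created and the warning disappears.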
57 changes: 57 additions & 0 deletions 5.0.1/docker-compose.yml
@@ -0,0 +1,57 @@
version: '2'
services:
elasticsearch:
image: eason02/elasticsearch:5.0.1
container_name: elasticsearch-image
restart: always
network_mode: host
environment:
- env=${environment}
ports:
- "9200:9200"
- "9300:9300"
volumes:
- ./elasticsearch/logs/:/usr/share/elasticsearch/logs
- ${E_LOCAL_DATA_PATH}:/usr/share/elasticsearch/data
- ./elasticsearch/config/:/usr/share/elasticsearch/config/
logging:
driver: json-file
options:
max-file: '5'
max-size: 10m
logstash:
image: eason02/logstash:5.0.1
container_name: logstash-image
restart: always
network_mode: host
environment:
- env=${environment}
- L_ELASTICSEARCH_HOST=${L_ELASTICSEARCH_HOST_ENV}
ports:
- "5044:5044"
volumes:
- ./logstash/logs/:/var/log/logstash
- ./logstash/config/:/etc/logstash/
logging:
driver: json-file
options:
max-file: '5'
max-size: 10m
kibana:
image: eason02/kibana:5.0.1
container_name: kibana-image
restart: always
network_mode: host
environment:
- env=${environment}
extra_hosts:
- "elasticsearchHost:${K_ELASTICSEARCH_HOST_IP}"
ports:
- "5601:5601"
volumes:
- ./kibana/config/kibana.yml:/etc/kibana/kibana.yml
logging:
driver: json-file
options:
max-file: '5'
max-size: 10m
7 changes: 7 additions & 0 deletions 5.0.1/elasticsearch/.env
@@ -0,0 +1,7 @@
# ~/elasticstack/elasticsearch/.env

# environment properties, for distinguishing environments
environment=prod

# elasticsearch-image: data path on the host machine, default /usr/data/
E_LOCAL_DATA_PATH=/usr/data/
51 changes: 51 additions & 0 deletions 5.0.1/elasticsearch/Dockerfile
@@ -0,0 +1,51 @@
# From basic elk env
FROM eason02/java:1.8

# Maintainer
MAINTAINER [email protected]

# Install gosu for step-down from root
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64" && \
curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64.asc" && \
gpg --verify /usr/local/bin/gosu.asc && \
rm /usr/local/bin/gosu.asc && \
rm -rf /root/.gnupg/ && \
chmod +x /usr/local/bin/gosu && \
gosu nobody true

# Install elasticsearch
RUN set -x && \
cd ~ && \
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch && \
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.1.rpm && \
sha1sum elasticsearch-5.0.1.rpm && \
rpm --install elasticsearch-5.0.1.rpm && \
mkdir -p /usr/share/elasticsearch/config && \
mkdir -p /usr/share/elasticsearch/data && \
mkdir -p /usr/share/elasticsearch/logs && \
chown elasticsearch:elasticsearch /usr/share/elasticsearch/logs

ENV PATH /usr/share/elasticsearch/bin:$PATH

# COPY ./config /usr/share/elasticsearch/config/

# RUN set -x && \
# ls -R /usr/share/elasticsearch/config && \
# cd /usr/share/elasticsearch/config && \
# chown root:elasticsearch -R *

EXPOSE 9200 9300

VOLUME /usr/share/elasticsearch/data
VOLUME /usr/share/elasticsearch/logs
VOLUME /usr/share/elasticsearch/config

WORKDIR /usr/share/elasticsearch/bin

COPY docker-entrypoint.sh /

RUN chmod +x /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
93 changes: 93 additions & 0 deletions 5.0.1/elasticsearch/config/elasticsearch.yml
@@ -0,0 +1,93 @@
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/settings.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
http.host: 0.0.0.0
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-discovery-zen.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
