Setting up a multi-tiered log infrastructure Part 4 -- Elasticsearch Setup

  1. Setting up a multi-tiered log infrastructure Part 1 -- Getting Started
  2. Setting up a multi-tiered log infrastructure Part 2 -- System Overview
  3. Setting up a multi-tiered log infrastructure Part 3 -- System Build
  4. Setting up a multi-tiered log infrastructure Part 4 -- Elasticsearch Setup
  5. Setting up a multi-tiered log infrastructure Part 5 -- MongoDB Setup
  6. Setting up a multi-tiered log infrastructure Part 6 -- Graylog Setup
  7. Setting up a multi-tiered log infrastructure Part 7 -- Graylog WebUI Setup
  8. Setting up a multi-tiered log infrastructure Part 8 -- Rsyslog Setup
  9. Setting up a multi-tiered log infrastructure Part 9 -- Rsyslog HA Setup
  10. Setting up a multi-tiered log infrastructure Part 10 -- HA Cluster Setup
  11. Setting up a multi-tiered log infrastructure Part 11 -- Cluster Tuning

Set up Elasticsearch cluster nodes

Install Elasticsearch

In this example we are building out a three-node cluster, but the same steps scale up to whatever cluster size you choose. The Elasticsearch setup and configuration docs are at https://www.elastic.co/guide/en/elasticsearch/reference/2.4/index.html

Install Java

yum install java-1.8.0-openjdk-headless.x86_64
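To confirm the JDK is in place, check the version (Elasticsearch 2.x requires at least Java 7; Java 8 is recommended):

java -version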

Import signing key from elastic.co

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Create repo file

vi /etc/yum.repos.d/Elasticsearch.repo

Insert this text

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
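Before installing, you can verify that yum sees the new repository:

yum repolist enabled | grep elasticsearch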

Install Elasticsearch

yum install elasticsearch

Set ES to start on boot

systemctl enable elasticsearch.service
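To confirm the unit is enabled:

systemctl is-enabled elasticsearch.service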

Configure Elasticsearch

Edit the ES config before starting Elasticsearch on the nodes.

vi /etc/elasticsearch/elasticsearch.yml

Change the setting for cluster.name

Set the name on all three nodes to graylog (Graylog expects this cluster name and won't connect otherwise)

Change the setting for node.name

Set it to the individual hostname of the node

Change the setting for node.master

On the node that will be used as the master for the cluster, leave the node.master line commented out; the default is to act as a master. This node will also run the Graylog server and web interface.

Set node.master: false on the other two nodes. These nodes will be used for data storage and shard replication only.

Change the setting for node.data

Set node.data: false on the node that will be used as the master. This node will not store data, as it will only function as the master.

Set node.data: true on the other two nodes. These nodes will be used for data storage and shard replication only.

Setup note: the default configuration is set up for multicast. To disable multicast and use unicast discovery instead, make the next two changes.

Change the setting for discovery.zen.ping.multicast.enabled

Uncomment discovery.zen.ping.multicast.enabled: false

Change the setting for discovery.zen.ping.unicast.hosts

Set discovery.zen.ping.unicast.hosts: ["node-master-hostname"]. This should be the hostname of the node that will function as the master. Set this on all three ES nodes.
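Putting the configuration changes together, the edited lines in /etc/elasticsearch/elasticsearch.yml would look something like the sketch below. The hostnames es-master and es-data-01 are placeholders for illustration, not values from this build.

# On the master/Graylog node (node.master stays commented out, so it defaults to true)
cluster.name: graylog
node.name: es-master
node.data: false
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-master"]

# On each of the two data nodes
cluster.name: graylog
node.name: es-data-01
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-master"]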

Configure firewalld rules

Now that the config file is edited, let's make some firewall rule changes. If for some reason you aren't using a firewall, you can skip this section.

Configure a default zone with firewalld (the default zone is assumed to already be set to "internal")

Create a new service file for our Elasticsearch nodes

vi /etc/firewalld/services/es-transport.xml

Use this as the contents for es-transport.xml

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>es-transport</short>
  <description>transport for elasticsearch nodes.</description>
  <port protocol="tcp" port="9300"/>
</service>

Permanently create an SELinux context label

semanage fcontext -a -t firewalld_etc_rw_t -s system_u /etc/firewalld/services/es-transport.xml

Apply the new SELinux label

restorecon -vF /etc/firewalld/services/es-transport.xml
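You can verify the context with:

ls -Z /etc/firewalld/services/es-transport.xml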

Add rich rules to allow connections from our other nodes. (Add one rule for each Elasticsearch node that needs to talk with the others; see the loop sketch below.)

firewall-cmd --zone=internal --add-rich-rule='rule family="ipv4" source address="xxx.xxx.xxx.xxx/32" service name="es-transport" accept' --permanent
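Repeat the rule for each node address. As a sketch, assuming the other nodes are at 192.168.1.11 and 192.168.1.12 (substitute your own addresses):

for ip in 192.168.1.11 192.168.1.12; do
  firewall-cmd --zone=internal --permanent --add-rich-rule="rule family=\"ipv4\" source address=\"${ip}/32\" service name=\"es-transport\" accept"
done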

Reload the current firewall config

firewall-cmd --reload

Check the zone and verify the new rules. Rich rules are listed separately from plain services, so list them directly:

firewall-cmd --zone=internal --list-rich-rules
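Each rule you added should be echoed back, looking something like this (the address shown is illustrative):

rule family="ipv4" source address="192.168.1.11/32" service name="es-transport" accept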

Verify the config

Start Elasticsearch on all of the clustered nodes; you will have to run this on each server.

systemctl start elasticsearch.service
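Before checking the cluster, you can confirm the service came up on each node:

systemctl status elasticsearch.service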

Check that the nodes have formed a cluster, and then we can move on to the next step.

curl 'http://localhost:9200/_cluster/health?pretty'
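On a healthy three-node cluster the response should report three nodes and a green status. Illustrative output, trimmed for brevity (your values will vary):

{
  "cluster_name" : "graylog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_shards" : 0,
  "unassigned_shards" : 0
}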
