Setting up a multi-tiered log infrastructure Part 2 -- System Overview

System Build Overview

The next step is to build the environment, starting with the Elasticsearch (ES) nodes and the log parser/search frontend, because they require certain components to be identical. The process assumes a minimal OS install of CentOS 7, but any major *NIX-based OS can be used (just remember that the commands may differ). Start by building three servers: two will be ES data nodes and one will be the ES master node. The ES master node is also where Graylog and MongoDB (mongod) are installed.
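As a sketch of the initial package setup on CentOS 7, the Elasticsearch 2.x yum repository can be defined as follows (the repo file path is a conventional choice; the baseurl and GPG key are the published locations for the 2.x packages):

```ini
# /etc/yum.repos.d/elasticsearch.repo -- sketch for the three ES servers
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
```

With the repo in place, `yum install java-1.8.0-openjdk elasticsearch` on each of the three servers keeps the Java and ES versions identical across the cluster, which is the point of building these nodes first.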

Node Details

  1. Endpoints (Windows, Linux etc.)
    • Application Dependencies
      1. Determined by platform
    • Inputs
      1. Logs generated locally by system and applications
    • Processing
      1. Different tools can be used depending on the platform. Some processing can be handled on the endpoint before the logs are shipped upstream
    • Outputs
      1. Output logs to the log aggregator server
  2. Central Log Aggregators (Two Nodes)
    • Application Dependencies
      1. rsyslog
    • Inputs
      1. Incoming streams from endpoints
    • Processing
      1. Check logs from a remote source, then forward them to the CLR
      2. Check logs from a local source and write them to a local file
    • Outputs
      1. Output logs in raw format to CLR
  3. Central Log Repository (One Node)
    • Application Dependencies
      1. rsyslog
    • Inputs
      1. Incoming stream from Central Log Aggregators
      2. (Optional) Incoming alerts/logs from log analysis server
    • Processing
      1. Forward to log parser server
      2. Check logs from a remote source, then write them to a local file
      3. (Optional) Check logs from the LAS, but do not forward them back to the LAS
      4. Check logs from a local source and write them to a local file
    • Outputs
      1. Output logs to log parser server for indexing
      2. Output raw logs to local file (%HOSTNAME%-YYYY-MM-DD.log)
      3. (Optional) Output Logs in raw format to log analysis server
  4. Log Parser and Search Frontend (One Node)
    • Application Dependencies
      1. Java – OpenJDK v1.8 or later recommended
      2. Elasticsearch – latest v2.x from the ES repo
      3. MongoDB – latest v3.x from the MongoDB repo
      4. Graylog-Server – latest v2.x from source
      5. (Optional) Apache – latest available in repo, used as a reverse proxy
    • Inputs
      1. Incoming streams from CLR
    • Processing
      1. Custom rules can be created to parse logs and create actions and alerts based on them
    • Outputs
      1. Output alerts to email server
      2. Output to the Elasticsearch backend for indexing and storage
  5. Storage Cluster (Two Nodes)
    • Application Dependencies
      1. Java – OpenJDK v1.8 or later recommended
      2. Elasticsearch – latest v2.x from the ES repo
    • Inputs
      1. Incoming streams from log parser server
    • Processing
      1. No processing; used for storage and shard replication
    • Outputs
      1. Standard endpoint logging to CLR
  6. Log Analysis Server (Optional)
    • Application Dependencies
      1. Apache – latest available in repo
      2. PHP – latest available in repo
      3. OSSEC – latest stable from source, v2.8.3 or later
    • Inputs
      1. Incoming streams from CLR via rsyslog
    • Processing
      1. Perform rule checks based off OSSEC rules
      2. Send alerts based on anomalies
    • Outputs
      1. Output alerts to email server
      2. Output logs to CLR for storage and routing to log parser server
  7. E-Mail Server (Notifications)
    • Application Dependencies
      1. Email Server
    • Inputs
      1. Allow relay of messages from log parser server
      2. Allow relay of messages from LAS server
    • Processing
      1. Check that the destination address is a local account
    • Outputs
      1. Email the recipients listed as contacts for specific alerts
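To make the aggregator tier above concrete, here is a minimal rsyslog sketch for a Central Log Aggregator node: write logs received from remote endpoints to a local per-host file, and relay everything to the CLR. The file path, hostname, and port are placeholders, not values from this build.

```
# /etc/rsyslog.d/50-aggregator.conf (hypothetical path) -- Central Log Aggregator
# @@ = TCP forwarding, @ = UDP; adjust the CLR hostname and port for your site
$template RemoteHost,"/var/log/remote/%HOSTNAME%.log"
:fromhost-ip, !isequal, "127.0.0.1" ?RemoteHost
*.* @@clr.example.com:514
```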
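The CLR's outputs can be sketched the same way: one dynamic-file template producing the %HOSTNAME%-YYYY-MM-DD.log naming described above, plus a forwarding rule to the log parser node. The Graylog hostname and input port 5514 are assumptions for illustration.

```
# /etc/rsyslog.d/50-clr.conf (hypothetical path) -- Central Log Repository
# Write each host's logs to a dated file, then forward a copy for indexing
$template PerHostDaily,"/var/log/archive/%HOSTNAME%-%$YEAR%-%$MONTH%-%$DAY%.log"
*.* ?PerHostDaily
*.* @@graylog.example.com:5514
```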
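For the storage cluster, a data node's elasticsearch.yml (ES 2.x settings) might look like the sketch below; the cluster name, node name, and master hostname are placeholders. The second data node is identical apart from node.name.

```
# /etc/elasticsearch/elasticsearch.yml -- ES 2.x data node (sketch)
cluster.name: graylog            # must match on all three ES nodes
node.name: es-data-1
node.master: false               # data-only node; the master runs alongside Graylog
node.data: true
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["es-master.example.com"]
```

Setting node.master and node.data explicitly keeps the roles from drifting: the two storage nodes hold the shards, while the master node coordinates the cluster and hosts Graylog and mongod as described above.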
