
ELK in Practice

Introduction to ELK

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data-processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a store such as Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs.

Deployment Notes

  • Two 4-core / 8 GB Alibaba Cloud hosts are used
    • 10.2.1.200 Filebeat, Kafka
    • 10.2.1.201 Logstash, Elasticsearch, Kibana
  • All components are deployed with ansible-playbook

Software Versions

  • filebeat-7.3.1-1.x86_64
  • kafka_2.11-2.1.1
  • elasticsearch-7.3.1-1.x86_64
  • logstash-7.3.1-1.noarch
  • kibana-7.3.1-1.x86_64

Architecture

  • Filebeat -> Kafka -> Logstash -> Elasticsearch -> Kibana (X-Pack is used to secure the cluster)

Elasticsearch Deployment

Install the JDK and Set Environment Variables

# rpm -ivh jdk-8u161-linux-x64.rpm
# cat>>/etc/profile<<EOF

# JAVA bin PATH setup
export JAVA_HOME=/usr/java/jdk1.8.0_161/
export CLASSPATH=.:\$JAVA_HOME/jre/lib/rt.jar:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
# source /etc/profile

# cat>>~/.bashrc<<EOF

# JAVA bin PATH setup
export JAVA_HOME=/usr/java/jdk1.8.0_161/
export CLASSPATH=.:\$JAVA_HOME/jre/lib/rt.jar:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
# source ~/.bashrc
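A quick sanity check that the JDK is now on the PATH:

# java -version   # should report version 1.8.0_161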

Install Elasticsearch

# rpm -ivh elasticsearch-7.3.1-x86_64.rpm
# grep -vE "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: version_test
path.data:
- /data0/elasticsearch/data
path.logs: /data0/elasticsearch/logs
network.host: 10.2.1.201
http.port: 9200
discovery.seed_hosts:
- 10.2.1.201
cluster.initial_master_nodes: ['10.2.1.201']
bootstrap.system_call_filter: false
node.attr.zone: cn-hangzhou
node.attr.box_type: hot
path.repo:
- "/es-ossfs/snapshots"
http.max_initial_line_length: 8k
http.max_header_size: 16k
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.monitoring.enabled: true

# systemctl start elasticsearch.service
# systemctl enable elasticsearch.service

Configure TLS and Authentication

The first step is to generate certificates that allow the nodes to communicate securely. We use the elasticsearch-certutil command here; the resulting elastic-certificates.p12 file is the one already referenced in the elasticsearch.yml shown above.

  • Step 1: Configure TLS on the Elasticsearch master node
    # /usr/share/elasticsearch/bin/elasticsearch-certutil --help
    # /usr/share/elasticsearch/bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
    # cd /etc/elasticsearch && cp /usr/share/elasticsearch/config/elastic-certificates.p12 .
    # vim /etc/elasticsearch/elasticsearch.yml
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

After updating the configuration file, the master node can be restarted.
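Before restarting, make sure the elasticsearch user can read the copied keystore. A minimal sketch (the exact ownership and mode produced by elasticsearch-certutil may differ on your system):

# chown root:elasticsearch /etc/elasticsearch/elastic-certificates.p12
# chmod 640 /etc/elasticsearch/elastic-certificates.p12
# systemctl restart elasticsearch.service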

  • Step 2: Set Elasticsearch cluster passwords

    Once the master node is back up, passwords can be set for the cluster's built-in users. Run /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto to generate random passwords, or /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive to define them manually. Record these passwords.
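    A quick way to confirm that authentication is working, assuming the generated password for the elastic user is substituted for <password>:

    # curl -u elastic:<password> "http://10.2.1.201:9200/_cluster/health?pretty"
    # curl -u elastic:<password> "http://10.2.1.201:9200/_cat/nodes?v"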

Kibana Deployment

# rpm -ivh kibana-7.3.1-x86_64.rpm
# grep -vE "^$|^#" /etc/kibana/kibana.yml
server.host: "10.2.1.201"
elasticsearch.hosts: ["http://10.2.1.201:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "xxx"
xpack.monitoring.enabled: true
# systemctl start kibana.service
# systemctl enable kibana.service
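A quick check that Kibana is listening (any HTTP response on port 5601 confirms the service is up; logging in requires the elastic credentials configured above):

# netstat -tunlp | grep 5601
# curl -sI "http://10.2.1.201:5601/" | head -n 1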


Kafka Deployment (Single Node)

# tar -zxf kafka_2.11-2.1.1.tgz -C /usr/local/
# ln -s /usr/local/kafka_2.11-2.1.1 /usr/local/kafka
# grep -vE "^$|^#" /usr/local/kafka/config/server.properties
broker.id=0
listeners=PLAINTEXT://10.2.1.200:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
# /usr/local/kafka/bin/zookeeper-server-start.sh -daemon /usr/local/kafka/config/zookeeper.properties
# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
# netstat -tunlp|grep java
tcp 0 0 10.2.1.200:9092 0.0.0.0:* LISTEN 11427/java
tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN 10716/java
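Filebeat (configured below) publishes to a topic named logs. Kafka auto-creates topics by default, but the topic can also be created explicitly up front; a sketch for this single-node setup:

# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic logs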

Filebeat Deployment

# rpm -ivh filebeat-7.3.1-x86_64.rpm
# grep -vE "^$|^#" /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
setup.kibana:
  host: "10.2.1.201:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
output.kafka:
  enabled: true
  hosts: ["10.2.1.200:9092"]
  topic: 'logs'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
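Filebeat is then started and enabled like the other services:

# systemctl start filebeat.service
# systemctl enable filebeat.service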

Logstash Deployment

# rpm -ivh logstash-7.3.1.rpm
# grep -vE "^$|^#" /etc/logstash/logstash.yml
path.data: /var/lib/logstash
http.host: "10.2.1.201"
path.logs: /var/log/logstash
# grep -vE "^$|^#" /etc/logstash/conf.d/logstash-input-kafka.conf
input {
  kafka {
    bootstrap_servers => ["10.2.1.200:9092"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    topics => ["logs"]
    decorate_events => true
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://10.2.1.201:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "xxx"
  }
}
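Before starting the service, the pipeline can be syntax-checked; a sketch using the RPM install paths:

# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash-input-kafka.conf --config.test_and_exit
# systemctl start logstash.service
# systemctl enable logstash.service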

Verifying the Intermediate Stages

Filebeat to Kafka

# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181    # make sure a topic named logs exists
__consumer_offsets
logs
test
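To inspect the actual messages Filebeat has produced, the console consumer can be used (--max-messages keeps the output short):

# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.2.1.200:9092 --topic logs --from-beginning --max-messages 3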

Kafka to Logstash

# /usr/local/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.2.1.200:9092 --topic logs --time -1
logs:0:10022
# /usr/local/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.2.1.200:9092 --topic logs --time -2
logs:0:0
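GetOffsetShell reports the latest offset with --time -1 and the earliest with --time -2, so a growing latest offset shows that messages are arriving. To check that Logstash is actually consuming them, describe its consumer group (the Kafka input plugin defaults to group_id logstash unless overridden):

# /usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server 10.2.1.200:9092 --describe --group logstash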

End-to-End Verification

Run the following three times on the host where Filebeat is installed, to generate three log lines
# echo "111" >> /var/log/yum.log

Check the Kafka offsets again; the latest offset should grow by three (here from 10022 to 10025)
# /usr/local/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.2.1.200:9092 --topic logs --time -1
logs:0:10025
# /usr/local/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.2.1.200:9092 --topic logs --time -2
logs:0:0

Verify the Log Data in Kibana
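In Kibana, create an index pattern such as filebeat-* (Management -> Index Patterns) and the new documents should show up in Discover. As a quick check from the shell, reusing the elastic password:

# curl -u elastic:xxx "http://10.2.1.201:9200/_cat/indices/filebeat-*?v"
# curl -u elastic:xxx "http://10.2.1.201:9200/filebeat-*/_count?pretty"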

A Few Notes on X-Pack

  • In version 7.x, X-Pack is already bundled and does not need to be installed separately
  • With X-Pack enabled, you need to know how to configure role-based access control (RBAC) in Kibana (see the sketch after this list)
  • Calling the API with X-Pack enabled:
    # curl -u elastic:xxx "http://10.2.1.201:9200/_cluster/health?pretty"  # non-interactive call (password given on the command line)
    # curl -X GET -u elastic '10.2.1.201:9200/_xpack/security/role' # interactive call (prompts for the password)
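A minimal sketch of defining a role through the security API; the role name readonly_logs and the filebeat-* pattern are illustrative only:

# curl -u elastic:xxx -H 'Content-Type: application/json' -X POST "http://10.2.1.201:9200/_security/role/readonly_logs" -d '{"indices":[{"names":["filebeat-*"],"privileges":["read"]}]}'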

Summary

  • Elasticsearch access-control options
    • 1. X-Pack
    • 2. The SearchGuard plugin

That is all for this ELK introduction for now; if you have any questions, feel free to leave a comment.


Author: dongsheng
Original post: https://mds1455975151.github.io/archives/628093c8.html
Copyright notice: please credit the source when reposting.
