
Cluster Logging System Architecture and Practice: An ELK Beginner's Setup

Contents: basic considerations

 Elasticstack 5.1.2 Cluster Logging System Architecture and Practice

  1. Modify related system settings
  2. Install Elasticsearch

  3. Install Kibana

  4. Install Logstash

  5. Install the X-Pack plugin

  6. Log in via the web UI

1. Introduction to the ELK Stack

What the name ELK means

ELK stands for Elasticsearch + Logstash + Kibana. These three are the core components, but they are not the whole stack.

  • Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful API, support for many data sources, automatic search load balancing, and more.

  • Logstash is a fully open-source tool that collects and filters your logs and stores them for later use (for example, searching).

  • Kibana is also a free, open-source tool. It provides a web UI for the logs handled by Logstash and Elasticsearch and helps you aggregate, analyze, and search important log data.

The ELK Stack is the combination of the three open-source projects Elasticsearch, Logstash, and Kibana; for real-time data retrieval and analysis the three are usually used together.

System environment:

CentOS Linux release 7.3.1611 (Core) 


Basic environment preparation:

Stop the firewall: systemctl stop firewalld

Set SELinux to permissive (effectively disabled for now): setenforce 0

JDK version: 1.8

This setup uses three nodes:

  node1 (Elasticsearch + Logstash + Kibana + x-pack)

  node2 (Elasticsearch + x-pack)

  node3 (Elasticsearch + x-pack)

The installation packages used here were downloaded in advance; if you need them, download them yourself from the official Elastic website.

$ll /apps/tools/
total 564892
-rw-r--r-- 1 root root  29049540 Feb 27 15:19 elasticsearch-6.2.2.tar.gz
-rw-r--r-- 1 root root  12382174 Jun  1 10:49 filebeat-6.2.2-linux-x86_64.tar.gz
-rw-r--r-- 1 root root  83415765 Feb 27 15:50 kibana-6.2.2-linux-x86_64.tar.gz
-rw-r--r-- 1 root root 139464029 Feb 27 16:13 logstash-6.2.2.tar.gz
-rw-r--r-- 1 root root 314129017 Jun  1 10:48 x-pack-6.2.2.zip

2. Main Elastic Stack components

 1. Modify related system settings

1.  Edit /etc/security/limits.conf and add the following lines

es hard nofile 65536
es soft nofile 65536              # maximum number of open file handles
es soft memlock unlimited           # no limit on locked memory
es hard memlock unlimited
  2. Edit /etc/sysctl.conf and add the following line

    vm.max_map_count=262144            # maximum number of memory map areas a single process may have
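
A quick way to apply and verify these settings (a sketch; the values come from the two files above, and the es user is the one created later in the Elasticsearch install step):

sysctl -p                                  # reload /etc/sysctl.conf
sysctl vm.max_map_count                    # should now print 262144
su - es -c 'ulimit -n'                     # open-file limit for the es user, should be 65536
su - es -c 'ulimit -l'                     # locked-memory limit, should be unlimited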

Elasticsearch: near-real-time indexing

2. Install Elasticsearch

Elasticsearch is installed on all three nodes in this deployment, and the configuration is essentially identical on each node.

1.  Unpack the installation package

tar xf elasticsearch-6.2.2.tar.gz

2.  Edit the configuration file elasticsearch.yml

cluster.name: ctelk                                             # cluster name; must be identical on every node
node.name: node-1                                               # node name
bootstrap.memory_lock: true                                     # lock the process memory to prevent swapping
network.host: IP                                                # IP the node binds to, usually the local IP
http.port: 9200                                                 # HTTP service port
discovery.zen.ping.unicast.hosts: ["IP", "IP", "IP"]            # unicast discovery: the hosts in the cluster
discovery.zen.minimum_master_nodes: 2                           # minimum number of master-eligible nodes; the official recommendation is N/2 + 1
gateway.recover_after_nodes: 3                                  # start state recovery only after at least this many nodes have joined
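
For reference, a filled-in node-1 file might look like the following. This is only a sketch: 10.20.88.161 appears later in this article as one of the nodes, while 10.20.88.162 and 10.20.88.163 are assumed addresses for the other two.

cluster.name: ctelk
node.name: node-1
bootstrap.memory_lock: true
network.host: 10.20.88.161
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.20.88.161", "10.20.88.162", "10.20.88.163"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 3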

3.  Adjust the jvm.options configuration

-Xms8g                                  # initial (minimum) heap size
-Xmx8g                                  # maximum heap size

  4. Elasticsearch must be started by a non-root user, so create an ordinary user here to manage it

groupadd es
useradd -g es es
chown -R es:es elasticsearch-6.2.2/
su - es                                 # switch to the es user before starting (elasticsearch refuses to run as root)
bin/elasticsearch -d
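
Once the process is up on a node, a quick sanity check (IP is whatever was configured in network.host):

curl http://IP:9200/                        # returns node name, cluster name and version as JSON
curl http://IP:9200/_cat/nodes?v            # lists the nodes that have joined the cluster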

Logstash: collects data; pipelines are configured with a Ruby-like DSL

3. Install Kibana

Kibana can be deployed on any one of the nodes; only a single instance is needed.

1.  Unpack the installation package

tar xf kibana-6.2.2-linux-x86_64.tar.gz

2.  Edit the configuration file kibana.yml

server.port: 5601                                                       # Kibana port
server.host: "IP"                                                       # IP Kibana listens on
elasticsearch.url: "http://esIP:port"                                   # Elasticsearch address and port

3.  Start the service

./bin/kibana -l /apps/product/kibana-6.2.2-linux-x86_64/logs/kibana.log &                   # create the logs directory yourself to hold the log file
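
To confirm Kibana came up (5601 is the port configured above; the Kibana IP is the one set in server.host):

ss -tlnp | grep 5601                        # Kibana should be listening on port 5601
# then open http://<Kibana IP>:5601 in a browser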

Kibana: displays the data, runs queries and aggregations, and generates reports

4. Install Logstash

Logstash can also be deployed on any one of the nodes; only a single instance is needed.

1.  Unpack the installation package

tar xf logstash-6.2.2.tar.gz

2.  Start the service

./bin/logstash -f /apps/product/logstash-6.2.2/config/logstash.conf &
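
The contents of logstash.conf are not shown in the article; a minimal sketch of such a pipeline could look like the following (the beats input, the port, the esIP placeholder and the index name are all assumptions, not the author's actual configuration):

input {
  beats {
    port => 5044                                  # receive events shipped by Filebeat
  }
}
output {
  elasticsearch {
    hosts => ["http://esIP:9200"]                 # the Elasticsearch node(s)
    index => "logstash-%{+YYYY.MM.dd}"            # one index per day
  }
}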

Kafka message queue, used as a buffer for incoming logs

5. Installing the X-Pack plugin

All of the packages used here have already been downloaded locally, so an offline install is all that is needed.

1.  Install x-pack into es, kibana and logstash

Installing x-pack into es will ask you to confirm along the way; just answer y.

./bin/elasticsearch-plugin install file:///apps/product/x-pack-6.2.2.zip                        # install the plugin into es
-> Downloading file:///apps/product/x-pack-6.2.2.zip
[=================================================] 100%   
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \.pipe* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.net.SocketPermission * connect,accept,resolve
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin forks a native controller @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java
security manager nor to system call filters.

Continue with installation? [y/N]y
Elasticsearch keystore is required by plugin [x-pack-security], creating...
-> Installed x-pack with: x-pack-core,x-pack-deprecation,x-pack-graph,x-pack-logstash,x-pack-ml,x-pack-monitoring,x-pack-security,x-pack-upgrade,x-pack-watcher

 The package downloaded here is the stock (unpatched) version and needs to be patched; that step was done by a colleague, so at this point it is enough to swap in the patched jar.

[root@dev161 product]# find ./ -name x-pack-core-6.2.2.jar 
./elasticsearch-6.2.2/plugins/x-pack/x-pack-core/x-pack-core-6.2.2.jar                    # replace this jar with the patched one
./x-pack-core-6.2.2.jar

Grant Elasticsearch permission to auto-create indices by adding the following to elasticsearch.yml

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,*

 Install x-pack into Kibana

./bin/kibana-plugin install file:///apps/product/x-pack-6.2.2.zip 
Attempting to transfer from file:///apps/product/x-pack-6.2.2.zip
Transferring 314129017 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete

Install x-pack into Logstash

./bin/logstash-plugin install file:///apps/product/x-pack-6.2.2.zip 
Installing file: /apps/product/x-pack-6.2.2.zip
Install successful

 2.  Set the passwords. Use setup-passwords interactive for the first initialization (setup-passwords auto generates random passwords instead of prompting for them).

./bin/x-pack/setup-passwords interactive                                                                 # initialize the passwords
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:                                                                           # set the elastic (es) password
Reenter password for [elastic]: 
Enter password for [kibana]:                                                                            # set the kibana password
Reenter password for [kibana]: 
Enter password for [logstash_system]:                                                                   # set the logstash_system password
Reenter password for [logstash_system]: 
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [elastic]
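
From now on requests to Elasticsearch need credentials; a quick check (the password is whatever you entered for the elastic user above):

curl -u elastic http://IP:9200/_cluster/health?pretty        # prompts for the elastic password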

 3.  Configure TLS/SSL for communication inside the cluster

Generate the CA file: ./bin/x-pack/certutil ca

./bin/x-pack/certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: es-oldwang-ca.p12                                                         # output file name
Enter password for es-oldwang-ca.p12 :                                                            # file password (123456)

  Use the CA file to generate the node certificate and key: ./bin/x-pack/certutil cert --ca es-oldwang-ca.p12 

./certutil cert --ca es-oldwang-ca.p12 
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires a SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the instance certificate, the key and the CA certificate

If you elect to generate multiple instances certificates, the output will be a zip file
containing all the generated certificates

Enter password for CA (es-oldwang-ca.p12) :                                                          # enter the password of es-oldwang-ca.p12
Please enter the desired output file [elastic-certificates.p12]: es-oldwang.p12                      # output file name
Enter password for es-oldwang.p12 :                                                                  # password for this file

Certificates written to /apps/product/elasticsearch-6.2.2/bin/x-pack/es-oldwang.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.

Move the generated files into the config directory, under a newly created ssl subdirectory

ll ssl/
total 8
-rw------- 1 es es 2524 Jun  4 18:53 es-oldwang-ca.p12
-rw------- 1 es es 3440 Jun  4 18:55 es-oldwang.p12

Edit elasticsearch.yml on each node and append the following four lines to the end of the file

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /apps/product/elasticsearch-6.2.2/config/ssl/es-oldwang.p12
xpack.security.transport.ssl.truststore.path: /apps/product/elasticsearch-6.2.2/config/ssl/es-oldwang.p12

Add the SSL keystore/truststore passwords to the Elasticsearch keystore

./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Enter value for xpack.security.transport.ssl.keystore.secure_password: 
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Enter value for xpack.security.transport.ssl.truststore.secure_password: 
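
Each node then needs to be restarted for the TLS settings to take effect; afterwards the cluster can be checked with authentication (a sketch; IP and the elastic password are the ones configured earlier):

su - es -c '/apps/product/elasticsearch-6.2.2/bin/elasticsearch -d'      # stop the old process first, then start again as the es user on every node
curl -u elastic http://IP:9200/_cat/nodes?v                              # all three nodes should be listed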

4.  Import the license file

For this exercise the license file has already been uploaded to the server, placed in the Elasticsearch root directory and named license.json.

Edit elasticsearch.yml on each node, add the following line at the end, and restart the cluster

xpack.security.enabled: false

Import the license file; this requires the elastic user's password. A success message is returned when the import completes.

curl -XPUT -u elastic 'http://10.20.88.161:9200/_xpack/license' -H "Content-Type: application/json" -d @license.json
Enter host password for user 'elastic':
{"acknowledged":true,"license_status":"valid"}

After the import, comment the line out again in elasticsearch.yml and restart the cluster

# xpack.security.enabled: false

3. Elastic Stack workflow

6. Log in via the web UI

Log in to the cluster from a browser:

(screenshot)

Edit the Kibana configuration file kibana.yml and set the login user and password

elasticsearch.username: "elastic"                      # es user
elasticsearch.password: "elastic"                      # the es password set earlier

Log in to Kibana from a browser:

(screenshot)

(screenshot)

 On the Kibana side you can also see that, after the licence patch, the expiry date is now in 2050.

(screenshots)

   Bonus:

http://IP:9200/_cluster/health?pretty                     # cluster health check
http://IP:9200/_cat/health
http://10.20.88.161:9200/_cat/health?v

 

(architecture diagram)

Rough description of the workflow:

1) Logstash is deployed on the machines that produce logs to watch and collect them, and then ships the collected logs to the broker.

2) The indexer gathers those logs and sends them on to Elasticsearch for storage (see the sketch after this list).

3) Finally, Kibana displays the required data and supports custom searches.
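
A sketch of what the indexer stage could look like as a Logstash pipeline, assuming Kafka is used as the broker (the broker address, topic name and Elasticsearch address below are assumptions for illustration):

input {
  kafka {
    bootstrap_servers => "192.168.2.14:9092"      # Kafka broker (assumed address)
    topics => ["system-logs"]                     # topic the log shippers write to (assumed name)
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.2.14:9200"]         # Elasticsearch node
    index => "logs-%{+YYYY.MM.dd}"                # one index per day
  }
}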

4. Environment preparation

OS: CentOS 7.2

JDK: 1.8.0_111

filebeat: 5.1.2

logstash: 5.1.2

elasticsearch: 5.1.2  (note: ELK Stack 5.1 and later requires JDK 1.8 or higher)

kibana: 5.1.2

X-Pack:5.1

kafka: 2.11-0.10.1.0

 

Test servers:

Hostname: node01   IP: 192.168.2.14   Role: master-eligible and data node, kafka/logstash

Hostname: node02   IP: 192.168.2.15   Role: master-eligible and data node, kibana

Hostname: node03   IP: 192.168.2.17   Role: master-eligible and data node, elasticsearch-head plugin

Hostname: test   IP: 192.168.2.70   Role: client

Note: at least 2 GB of memory per node is recommended

Test server setup:

Configure hosts (/etc/hosts)

192.168.2.14  node01

192.168.2.15  node02

192.168.2.17  node03

Disable the firewall & SELinux

Configure the yum repository:

#yum -y install epel-release

Time synchronization:

#rpm -qa |grep chrony

Configure the time sync source: # vi /etc/chrony.conf

# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server  10.100.2.5              iburst

Restart the time sync service: # systemctl restart chronyd.service

Install and configure the JDK on node01 through node03:

#yum install java-1.8.0-openjdk  java-1.8.0-openjdk-devel  #install OpenJDK

 

1) Configure the environment variables the standard way:

vim  /etc/profile
Paste the following three lines into /etc/profile:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

    2) After saving and closing, run: source  /etc/profile  # to make the settings take effect immediately.

[root@~]# echo $JAVA_HOME
[root@ ~]# echo $CLASSPATH
[root@ ~]# echo $PATH

Verify that the installation and configuration succeeded

# java  -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

    3) Download the required components to /home/soft

    #wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.2.zip
    #wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.2-linux-x86_64.tar.gz
    #wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.2.zip
    #wget http://apache.mirrors.lucidnetworks.net/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz

5. Install and configure elasticsearch on node01

1. Create the elk user and group

[root@node01 soft]# groupadd elk
[root@node01 soft]# useradd -g elk elk

2. Unpack elasticsearch into /usr/local/

[root@node01 soft]#unzip elasticsearch-5.1.2.zip -d /usr/local/

3. Create /data/db and /data/logs to hold the data files and log files

[root@node01 soft]# mkdir -pv /data/{db,logs}

4. Give the elk user and group ownership of /data/db, /data/logs and /usr/local/elasticsearch-5.1.2

[root@node01 soft]# chown elk:elk /usr/local/elasticsearch-5.1.2 -R
[root@node01 soft]# chown elk:elk /data/{db,logs} -R

5. Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml and set the following parameters:

[root@node01 config]# vim elasticsearch.yml
cluster.name: ELKstack-5
node.name: node01
path.data: /data/db
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.2.14","192.168.2.15","192.168.2.17"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: false # disable x-pack security to match Kibana; otherwise installing x-pack later requires username/password authentication

Notes:

cluster.name: ELKstack-5  # the cluster name (pick any name you like)

node.name: node01  # the node name

network.host: 0.0.0.0  # listen address; 0.0.0.0 means any host may connect

http.port: 9200  # can be left at the default

http.cors.enabled: true   # allow the head plugin to access es

http.cors.allow-origin: "*"

discovery.zen.ping.unicast.hosts: the initial list of master-eligible nodes in the cluster; nodes newly joining the cluster are discovered through them

discovery.zen.minimum_master_nodes: how many (candidate) nodes are needed to elect a master; usually set to N/2 + 1, where N is the number of nodes in the cluster

    xpack.security.enabled: false # disable es authentication, matching Kibana; if authentication were enabled, access would require a username and password

 

6. For elk to run properly, the following parameters need to be changed (a reboot is recommended after changing them)

1) [root@node01 config]# vim /etc/security/limits.conf  # adjust the limit parameters to allow the elk user to use mlockall

# allow user 'elk' mlockall
elk soft memlock unlimited
elk hard memlock unlimited
*  soft nofile 65536
*  hard nofile 131072
*  soft nproc 2048
*  hard nproc 4096

2) [root@node01 config]# vim /etc/security/limits.d/20-nproc.conf  # adjust the soft limit on the maximum number of processes

Change the following:
* soft nproc 4096
# to
* soft nproc 2048

3) [root@node01 config]# vim /etc/sysctl.conf   # raise the limit on how many VMAs (virtual memory areas) a process may own

Add the following line:

vm.max_map_count=655360

[root@node01 config]# sysctl -p # reload the changed parameters so they take effect

 

4) Adjust the JVM heap allocation, because elasticsearch 5.x allocates 2 GB of heap by default

[root@node01 elasticsearch-5.1.2]# vim config/jvm.options  
-Xms2g  
-Xmx2g

Change it to

[root@node01 elasticsearch-5.1.2]# vim config/jvm.options  
-Xms512m  
-Xmx512m

Otherwise you will see errors like the following:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000008a660000, 1973026816, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1973026816 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/elasticsearch-5.1.2/hs_err_pid11986.log

 

5) Start the elasticsearch service. Note: by default elasticsearch refuses to start as root, so switch to the unprivileged user first

[root@node01 elasticsearch-5.1.2]#su - elk
[elk@node01 elasticsearch-5.1.2]$cd /usr/local/elasticsearch-5.1.2
[elk@node01 elasticsearch-5.1.2]$nohup ./bin/elasticsearch &
[elk@node01 elasticsearch-5.1.2]$./bin/elasticsearch -d  # alternatively, start ElasticSearch as a background daemon

Note: to stop the service, find the PID with ps -ef |grep elasticsearch and kill it (kill PID)

 

6)运维后翻看进程是或不是监听端口9200/9300,何况浏览器访谈

[root@node01 ~]# ss -tlnp |grep '9200'
LISTEN   0  128  :::9200        :::*         users:(("java",pid=2288,fd=113)
[root@node01 ~]# curl http://192.168.2.14:9200
{
  "name" : "node01",
  "cluster_name" : "ELKstack-5",
  "cluster_uuid" : "jZ53M8nuRgyAKqgQCDG4Rw",
  "version" : {
    "number" : "5.1.2",
    "build_hash" : "c8c4c16",
    "build_date" : "2017-01-11T20:18:39.146Z",
    "build_snapshot" : false,
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}

 

6. Install elasticsearch on node02 and node03 the same way as on node01

1. Install and configure elasticsearch on node02

1) Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml and set the following parameters:

[root@node02 config]# vim elasticsearch.yml
cluster.name: ELKstack-5
node.name: node02
path.data: /data/db
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.2.14","192.168.2.15","192.168.2.17"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: false # disable es authentication, matching Kibana

    Note: the rest of the configuration is the same as node01

 

2. Install and configure elasticsearch on node03

1) Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml and set the following parameters:

[root@node03 config]# vim elasticsearch.yml
cluster.name: ELKstack-5
node.name: node03
path.data: /data/db
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.2.14","192.168.2.15","192.168.2.17"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: false # disable es authentication, matching Kibana

    Note: the rest of the configuration is the same as node01

 

3. After all three nodes (node01, node02, node03) are up, check that the cluster and its nodes are healthy

    Common query commands are listed below (worked examples with full URLs follow the list):

Check cluster health: curl -XGET

List cluster nodes: curl -XGET

List indices: curl -XGET

Create an index: curl -XPUT

Query an index: curl -XGET

Delete an index: curl -XDELETE
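
The endpoint URLs were lost from the list above; against the standard Elasticsearch REST API they look roughly like this (localhost:9200 and the index name test-index are assumptions used for illustration):

curl -XGET http://localhost:9200/_cluster/health?pretty        # cluster health
curl -XGET http://localhost:9200/_cat/nodes?v                  # cluster nodes
curl -XGET http://localhost:9200/_cat/indices?v                # list indices
curl -XPUT http://localhost:9200/test-index                    # create an index
curl -XGET http://localhost:9200/test-index?pretty             # show the index settings and mappings
curl -XDELETE http://localhost:9200/test-index                 # delete the index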

[root@node01 ~]# curl -XGET http://localhost:9200/_cat/health?v
epoch      timestamp cluster    status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1486384674 20:37:54  ELKstack-5 green           3         3      0   0    0    0        0             0                  -                100.0%
[root@node01 ~]# curl -XGET http://localhost:9200/_cat/nodes?v
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.2.17           22          94   3    0.58    0.55     0.27 mdi       *      node03
192.168.2.15           22          93   0    0.59    0.60     0.29 mdi       -      node02
192.168.2.14           22          93   1    0.85    0.77     0.37 mdi       -      node01

 

7. Install the head plugin on node3 (192.168.2.17) (elasticsearch 5.0 changed a great deal, and for now elasticsearch 5.x does not support installing head directly as a site plugin)

1. The code is downloaded from GitHub, so install git first, then grant permissions (777) on the files and directories

[root@node03 local]# yum install git
[root@node03 local]# git clone git://github.com/mobz/elasticsearch-head.git
[root@node03 local]# chmod 777 -R elasticsearch-head/*

 

2. Download Node.js, unpack it, and add it to the environment variables

[root@node03 soft]# wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.6.1-linux-x64.tar.gz
[root@node03 soft]# tar -xvf node-v4.6.1-linux-x64.tar.gz # unpack into the current directory
[root@node03 soft]# vim /etc/profile
Add the following: export PATH=/home/soft/node-v4.6.1-linux-x64/bin:$PATH 
[root@node03 soft]# source  /etc/profile # make the profile take effect

 

3. In /usr/local/elasticsearch-head/, run npm install to install with node.js

[root@node03 elasticsearch-head]# npm install -g cnpm --registry=https://registry.npm.taobao.org
[root@node03 elasticsearch-head]# npm install grunt --save-dev

 

4. Edit /usr/local/elasticsearch-head/Gruntfile.js

connect: {
    server: {
        options: {
            port: 9100,
            hostname: '0.0.0.0',
            base: '.',
            keepalive: true
        }
    }
}

Add the hostname property and set it to * or '0.0.0.0'

 

5. Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml, add the following settings, and restart the ES service

# The following two settings allow cross-origin requests; this is mainly because the 5.1 head plugin is installed differently from earlier versions

http.cors.enabled: true
http.cors.allow-origin: "*"

 

6. Edit /usr/local/elasticsearch/plugins/head/_site/Gruntfile.js

connect: {
    server: {
        options: {
            port: 9100,
            hostname: '0.0.0.0',
            base: '.',
            keepalive: true
        }
    }
}

Add the hostname property and set it to * or '0.0.0.0'

 

7. Edit the connection address in /usr/local/elasticsearch-head/_site/app.js:

Change head's connection address:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";

Change localhost to the address of the ES server, for example:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.2.17:9200";

 


9、消释信任并运维服务

    实践npm install下载依赖的包:

    [root@node03 elasticsearch-head]#npm install
    [root@node03 elasticsearch-head]#./node_modules/grunt/bin/grunt serverb & #后台启动服务

Test access from a browser:

    (screenshot)

 

8. Install and configure kibana on node02 (192.168.2.15)

1. Unpack kibana into /usr/local/

[root@node02 soft]# tar -xvf kibana-5.1.2-linux-x86_64.tar.gz -C /usr/local/

2. Edit /usr/local/kibana-5.1.2-linux-x86_64/config/kibana.yml as follows, then start the kibana service

[root@node02 soft]#vim /usr/local/kibana-5.1.2-linux-x86_64/config/kibana.yml  
server.port: 5601
server.host: "192.168.2.15"
elasticsearch_url: "http://192.168.2.15:9200"
xpack.security.enabled: false  # disable authentication, so adding the x-pack component to Kibana later does not require a username/password

[root@node02 kibana-5.1.2-linux-x86_64]# bin/kibana > /var/log/kibana.log 2>&1 &  # start the service

(screenshot)

 

9. Configure the client node test (192.168.2.70)

1. Install and configure the JDK (same as node01–node03; not repeated here)

2. Copy logstash to the client and unpack it into /usr/local/

[root@node02 config]# scp /home/soft/logstash-5.1.2.zip  root@192.168.2.70:/home/soft/
[root@test soft]#unzip logstash-5.1.2.zip -d /usr/local/

 

3、编辑logstash服务管理脚本(配置路线可依据实情匡正)
