For various reasons I need to set up an ELK platform on the intranet to collect logs from cloud machines. The cloud nodes are not all with one provider, which means they have no private connectivity between them and are widely scattered. The constraints are:
1. The ELK environment lives on the intranet and only a pull model is used; that is, no intranet IP is exposed for the outside world to call (no NAT).
2. It only requires that the intranet has outbound internet access.
3. Intranet hardware is cheap.
Based on these three points, the scenario is configured as follows: the scattered cloud nodes push data to a single redis node (ideally a cluster; kafka's password setup is too involved), and the intranet ELK then pulls the logs from redis down to local storage. Take care to get the firewall rules on the redis host right, since security is at stake (those with the energy can go straight for kafka).
You can download either the RPM package or the tar.gz binary package from the official site for the install. I have tested both here; both were used for x-pack cracking tests (a cracking example comes later).
Prerequisites:
Install the JDK:
yum install http://10.10.240.145/windows-client/jdk/jdk-8u171-linux-x64.rpm -y
If the connection to 10.10.240.145 fails, don't worry: 10.10.240.145 is my intranet mirror (^_^)
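A quick check that the JDK actually landed on the PATH (the exact version string depends on the RPM you used):
java -version
# should print something like: java version "1.8.0_171"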
Adjust the kernel and file-descriptor limits:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf echo "elk - nofile 65536" >> /etc/security/limits.conf
1. elasticsearch node install
Download the elasticsearch package and install it on the elasticsearch node (downloaded here from the intranet mirror):
1. Create the user
2. Create the db and logs directories
3. Back up the original configuration file
4. Change the owner of the extracted directory, the data directory, and the logs directory
curl -Lk http://10.10.240.145/elk/elasticsearch-6.3.2.tar.gz | tar xz -C /usr/local/ && \
useradd elk && \
cd /usr/local/ && ln -s elasticsearch-6.3.2 elasticsearch && \
mkdir -p /data/elasticsearch/{db,logs} && \
chown -R elk.elk /data/elasticsearch/ /usr/local/elasticsearch* && \
cd elasticsearch/config/ && mv elasticsearch.yml elasticsearch.yml.bak
1.2 elasticsearch configuration files
The elasticsearch configuration file comes in three variants: one for node1, one for node2, and one for the coordinating node; they differ only slightly.
1.2.1 elasticsearch_node1
cluster.name: linuxea-app_ds
node.name: master
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.10.240.113
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
xpack.security.enabled: false
Start it:
[root@linux-vm-Node113 ~]# sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d
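A quick sanity check that node1 is up (it can take a few seconds before the port answers):
curl http://10.10.240.113:9200
# should return a JSON banner containing "cluster_name" : "linuxea-app_ds"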
1.2.2 elasticsearch_node2
[root@linux-vm-Node114 /usr/local/elasticsearch-6.3.2/config]# cat /usr/local/elasticsearch-6.3.2/config/elasticsearch.yml
cluster.name: linuxea-app_ds
node.name: slave
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.10.240.114
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
#xpack.monitoring.collection.enabled: true
xpack.security.enabled: false
Start it:
[root@linux-vm-Node114 ~]# sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d
1.2.3 Open the firewall
Add the rules to the iptables configuration file:
-A INPUT -s 10.0.1.49 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "logstash" -j ACCEPT
-A INPUT -s 10.10.240.117 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "kibana" -j ACCEPT
-A INPUT -s 10.10.240.114 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-114" -j ACCEPT
-A INPUT -s 10.10.240.113 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-113" -j ACCEPT
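On a CentOS host using the iptables-services package, these rules would typically be appended to /etc/sysconfig/iptables (inside the *filter section, before COMMIT) and then reloaded; adjust this to whatever firewall tooling your hosts actually run:
vim /etc/sysconfig/iptables        # paste the -A INPUT rules above before the final COMMIT
systemctl restart iptables         # reload the persistent rule set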
Or add temporary rules to open 9200 and 9300 right away:
iptables -I INPUT 5 -s 10.0.1.49 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "logstash" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.117 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "kibana" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.114 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-114" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.113 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-113" -j ACCEPT
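To confirm the rules landed where you expect:
iptables -nvL INPUT --line-numbers | grep -E '9200|9300'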
So, once node2 is started, keep an eye on the logs and check whether anything went wrong.
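For example, the main log file follows the cluster name, and the cluster APIs show whether both nodes have joined (assuming the paths and addresses configured above):
tail -f /data/elasticsearch/logs/linuxea-app_ds.log
curl http://10.10.240.113:9200/_cat/nodes?v
curl http://10.10.240.113:9200/_cluster/health?pretty   # expect "number_of_nodes" : 2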
2. Configure the es coordinating node and kibana
The coordinating node acts as what is often called a load balancer. Requests such as search requests or bulk-index requests may involve data held on different data nodes. A search request, for example, is executed in two phases coordinated by the node that receives the client request. In the scatter phase, the coordinating node forwards the request to the data nodes that hold the data; each data node executes the request locally and returns its results to the coordinating node. In the gather phase, the coordinating node reduces each data node's results into a single global result set (demonstrated below once the coordinating node is running).
Set node.master, node.data and node.ingest to false so that the node serves only as a coordinating node.
2.1 Configure the coordinating node
The binary (tar.gz) package is used for this install too, and it is started as the elk user.
[root@linux-vm-Node49 ~]# curl -Lk http://10.10.240.145/elk/elasticsearch-6.3.2.tar.gz|tar xz -C /usr/local/
[root@linux-vm-Node49 ~]# useradd elk
[root@linux-vm-Node49 ~]# cd /usr/local/ && ln -s elasticsearch-6.3.2 elasticsearch
[root@linux-vm-Node49 /usr/local]# mkdir /data/elasticsearch/{db,logs} -p
[root@linux-vm-Node49 /usr/local]# chown -R elk.elk /data/elasticsearch/ /usr/local/elasticsearch-6.3.2
[root@linux-vm-Node49 /usr/local]# cd elasticsearch/config/
[root@linux-vm-Node49 /usr/local/elasticsearch/config]# mv elasticsearch.yml elasticsearch.yml.bak
Coordinating node configuration file: the coordinating node and kibana live on the same machine, and this node is only responsible for forwarding requests.
cluster.name: linuxea-app_ds
node.name: coordinating
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.0.1.49
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
node.master: false
node.data: false
node.ingest: false
search.remote.connect: false
node.ml: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 1
The prerequisite settings are still required here; change the owner and group, then start the node, as sketched below.
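A minimal sketch of that on the coordinating node (same sysctl/limits as in the prerequisites; the final search simply demonstrates that a request sent to 10.0.1.49 is fanned out to the data nodes and gathered back):
chown -R elk.elk /usr/local/elasticsearch-6.3.2 /data/elasticsearch
sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d
curl 'http://10.0.1.49:9200/_search?q=*&pretty'   # executed through the coordinating node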
2.2 kibana install
[root@linux-vm-Node49 ~]# curl -Lk http://10.10.240.145/elk/kibana-6.3.2-linux-x86_64.tar.gz|tar xz -C /usr/local/
[root@linux-vm-Node49 ~]# mkdir /data/kibana/logs/ -p
server.name: kibana
server.port: 5601
server.host: "10.0.1.49"
elasticsearch.url: "http://10.10.240.113:9200"
logging.dest: /data/kibana/logs/kibana.log
#logging.dest: stdout
logging.silent: false
logging.quiet: false
kibana.index: ".kibana"
xpack.security.enabled: false
#xpack.monitoring.enabled: true
#elasticsearch.username: "elastic"
#elasticsearch.password: "linuxea"
Note that x-pack is not enabled here; kibana is opened straight up without authentication.
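The tarball does not ship a service script, so a minimal way to bring kibana up is to run it as the elk user and poke the status API (assuming the tarball extracted to /usr/local/kibana-6.3.2-linux-x86_64 and elk owns it and /data/kibana):
chown -R elk.elk /usr/local/kibana-6.3.2-linux-x86_64 /data/kibana
sudo -u elk nohup /usr/local/kibana-6.3.2-linux-x86_64/bin/kibana >/dev/null 2>&1 &
curl -s http://10.0.1.49:5601/api/status | head -c 200   # or open http://10.0.1.49:5601 in a browser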