Graylog 2.2: Detailed Deployment and Installation

July 15, 2023

ELK is not very good at handling multi-line logs, nor does it preserve the original log format. Graylog can collect and monitor logs from many different applications, including plain files and syslog, and supports hand-written plugins on the client side. The official site offers several installation methods, installation is straightforward, a single host is enough, and it supports rule-based alerting. Its main advantages:

1. An all-in-one solution that is easy to install, with none of the integration issues between the three separate systems of ELK.
2. Collects raw logs; fields such as http_status_code and response_time can be added after the fact.
3. You can write your own collection scripts and send logs to the Graylog server with curl/nc using the GELF format; Fluentd and Logstash both have plugins that emit GELF messages. Rolling your own collector gives a lot of freedom: in practice it is enough to watch a log file's modify events with inotifywait and push each newly appended line to the Graylog server with curl/netcat.
4. Search results are highlighted, just like Google.
5. The search syntax is simple, e.g. source:mongo AND response_time_ms:>5000, so you avoid typing raw Elasticsearch JSON query syntax.
6. A search can be exported as Elasticsearch query JSON, which makes it easy to write scripts that call the Elasticsearch REST API directly.

This article installs the latest 2.2 release. (Architecture diagram: graylog.png)

Graylog (like ELK) has a mode of operation, inherent to log processing, in which new indices are created periodically. Since the data is therefore already partitioned at a higher level (across indices), there is no need to shard each individual index heavily. Retention = retention setting * maximum number of indices in the cluster.

Official site: graylog.org
Documentation: http://docs.graylog.org/en/latest/index.html
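As a sketch of point 3 above, a minimal GELF message can be built with nothing but printf and pushed with curl. The host address and GELF HTTP input port here are assumptions (12201 is only the common default, and a GELF HTTP input must actually be configured on the server):

```shell
#!/bin/sh
# Assumed Graylog server address and GELF HTTP input port; adjust to your setup.
GRAYLOG_HOST=10.10.240.117
GELF_HTTP_PORT=12201

# GELF 1.1 requires at least: version, host, short_message.
payload=$(printf '{"version":"1.1","host":"%s","short_message":"%s","level":6}' \
  "$(hostname)" "new line appended to /var/log/app.log")
echo "$payload"

# Uncomment to actually send (requires a GELF HTTP input on the server):
# curl -s -XPOST "http://${GRAYLOG_HOST}:${GELF_HTTP_PORT}/gelf" -d "$payload"
```

Combined with something like inotifywait -m -e modify on the log file, this can be fired once per appended line, which is the whole collector.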

MongoDB installation

Docker install:

curl -Lk https://raw.githubusercontent.com/LinuxEA-Mark/docker-mongodb/master/docker_install_mongodb.sh|bash

Conventional install:


[root@linuxea.com ~]# yum install java-1.8.0-openjdk-headless.x86_64 epel-release pwgen -y
[root@linuxea.com ~]# cat > /etc/yum.repos.d/mongodb-org-3.2.repo << EOF
[mongodb-org-3.2]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/3.2/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-3.2.asc
EOF
[root@linuxea.com ~]# yum install mongodb-org -y
[root@linuxea.com ~]# chkconfig --add mongod
[root@linuxea.com ~]# systemctl daemon-reload
[root@linuxea.com ~]# systemctl enable mongod.service
[root@linuxea.com ~]# systemctl start mongod.service
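Before moving on, it is worth a quick sanity check that mongod is actually answering. This sketch assumes the default port 27017 and the mongo shell installed above:

```shell
#!/bin/sh
# Ping the local mongod; prints "mongod: up" on success,
# "mongod: not reachable" if the shell is missing or the daemon is down.
if mongo --quiet --eval 'db.runCommand({ping: 1}).ok' 2>/dev/null | grep -q 1; then
  echo "mongod: up"
else
  echo "mongod: not reachable"
fi
```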

Elasticsearch installation

[root@linuxea.com ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
[root@linuxea.com ~]# cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
[root@linuxea.com ~]# yum install elasticsearch -y

Configuration:

[root@linuxea.com /etc/rsyslog.d]# egrep -v "^$|^#" /etc/elasticsearch/elasticsearch.yml 
cluster.name: graylog
path.data: /data/Elasticsearch
path.logs: /data/Elasticsearch/logs
network.host: 10.10.240.117
http.port: 9200

Start:

[root@linuxea.com ~]# chkconfig --add elasticsearch
[root@linuxea.com ~]# systemctl daemon-reload
[root@linuxea.com ~]# systemctl enable elasticsearch.service
[root@linuxea.com ~]# systemctl restart elasticsearch.service
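Once Elasticsearch is up, the cluster health endpoint should report the cluster name set in elasticsearch.yml above. The IP here is taken from that config; adjust if yours differs:

```shell
#!/bin/sh
# Build the health-check URL from the address in elasticsearch.yml.
ES_HOST=10.10.240.117
ES_URL="http://${ES_HOST}:9200/_cluster/health?pretty"
echo "checking: $ES_URL"

# Uncomment on the server itself; expect "cluster_name" : "graylog"
# and a green or yellow status:
# curl -s "$ES_URL"
```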

graylog-server installation:

[root@linuxea.com ~]# yum install -y java-1.8.0-openjdk-headless.x86_64
[root@linuxea.com ~]# rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-2.2-repository_latest.rpm
[root@linuxea.com ~]# yum install graylog-server -y 
[root@linuxea.com /data/mongodb]# cat /etc/graylog/server/server.conf
is_master = true
node_id_file = /etc/graylog/server/node-id
root_username = admin
root_timezone = Asia/Shanghai
password_secret = ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f
root_password_sha2 = 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.10.240.117:9000/api/
web_listen_uri = http://10.10.240.117:9000/
web_endpoint_uri = http://10.10.240.117:9000/api
web_enable = true
web_enable_cors = true
elasticsearch_discovery_zen_ping_unicast_hosts = 10.10.240.117:9300
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
elasticsearch_cluster_name = graylog
allow_leading_wildcard_searches = false
allow_highlighting = true
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
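The password_secret and root_password_sha2 values above are not magic: the first is just a long random string (pwgen was installed earlier for exactly this), and the second is the SHA-256 hash of the web login password. The hash in the sample config is that of the literal string "password":

```shell
#!/bin/sh
# Random secret for password_secret (pwgen -N 1 -s 96 as in the Graylog docs);
# guarded so the line is harmless on a machine without pwgen.
command -v pwgen >/dev/null 2>&1 && pwgen -N 1 -s 96 || echo "pwgen not installed"

# SHA-256 of the desired web login password -> root_password_sha2.
echo -n password | sha256sum | cut -d' ' -f1
# -> 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
```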

Log in directly with admin / password. (Screenshot: 20170329141658.png)
