ELK 5.5: Redis log grok processing (Filebeat)

July 15, 2023

The Redis log is written at the notice level and contains a lot of information. For day-to-day monitoring only the errors and warnings matter, so filtering is applied at collection time to keep just those lines:

include_lines: ["WARNING","ERR"]

include_lines: a list of regular expressions to match the lines you want Filebeat to include. Filebeat exports only lines that match an expression in the list; by default, all lines are exported.
exclude_files: a list of regular expressions to match the files you want Filebeat to ignore; by default, no files are excluded.
Reference: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html
Multiline reference: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html#multiline
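
For reference, a minimal sketch of how these options can sit together in a Filebeat 5.x prospector; the exclude_files pattern and the multiline settings here are purely illustrative and are not part of the setup below:

filebeat.prospectors:
 - input_type: log
   paths:
    - /data/logs/redis_6379.log
   # regular expressions; only matching lines are exported
   include_lines: ["WARNING", "ERR"]
   # hypothetical: skip rotated/compressed files
   exclude_files: ['\.gz$']
   # hypothetical multiline example: lines that do not start with a digit
   # (such as the startup banner) are appended to the previous event
   multiline.pattern: '^[0-9]'
   multiline.negate: true
   multiline.match: after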

Sample log

1:M 08 Sep 11:42:43.806 # Server started, Redis version 3.2.9
1:M 08 Sep 11:41:44.806 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 08 Sep 11:12:32.806 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 08 Sep 11:12:32.822 * DB loaded from disk: 0.016 seconds
1:M 08 Sep 11:12:32.822 * The server is now ready to accept connections on port 6379
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 3.2.9 (00000000/0) 64 bit
  .-`` .-```.  ```/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'    
1:M 08 Sep 11:40:45.806 # ERROR 123

The end result of the collection looks like the screenshots redis-error.jpg (ERR) and redis-warring.jpg (WARNING).

Install Redis

curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/docker-alpine-Redis/master/Sentinel/install_redis.sh|bash
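
Once the script finishes, a quick sanity check (assuming the default port and the requirepass value used in the configuration below):

redis-cli -h 127.0.0.1 -p 6379 -a OTdmOWI4ZTM4NTY1M2M4OTZh ping
# expected reply: PONG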

Redis log configuration

loglevel notice
logfile "/data/logs/redis_6379.log"

Example configuration file:

[root@linuxea.com-Node98 /data/rds]# cat /etc/redis/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile "/var/run/redis_6379.pid"
loglevel notice
logfile "/data/logs/redis_6379.log"
databases 8
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/data/redis"
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 100
slowlog-max-len 1000
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

masterauth "OTdmOWI4ZTM4NTY1M2M4OTZh"
requirepass "OTdmOWI4ZTM4NTY1M2M4OTZh"
# Generated by CONFIG REWRITE
#slaveof 172.25. 6379

Filebeat configuration

[root@linuxea.com-Node117 /data/logs]# cat /etc/filebeat/filebeat.yml 
filebeat.prospectors:
 - input_type: log
   paths:
    - /data/logs/access_nginx.log
   document_type: nginx-access-117
 - input_type: log
   paths:
    - /data/logs/slow_log.CSV
   document_type: mysql-slow-117
 - input_type: log
   paths:
    - /data/logs/redis_6379.log
   document_type: redis-6379-117
   include_lines: ["WARNING","ERR"]
output.redis:
  hosts: ["10.10.0.98"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  key: "default_list"
  db: 5
  timeout: 5
  keys:
    - key: "%{[type]}"
      mapping:
        "nginx-access-117": "nginx-access-117"
        "mysql-slow-117": "mysql-slow-117"
        "redis-6379-117": "redis-6379-117"

Logstash configuration

patterns

[root@linuxea.com-Node49 /etc/logstash/patterns.d]# cat redis 
REDISTIMESTAMP %{MONTHDAY} %{MONTH} %{TIME}
REDISLOG %{POSINT:pid}:%{WORD:role} %{REDISTIMESTAMP:timestamp} %{DATA:loglevel} %{GREEDYDATA:msg}
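
Applied to one of the warning lines from the sample log above, REDISLOG breaks the message into roughly these fields; the "#" marker is what the mutate filter below translates into a readable level:

# 1:M 08 Sep 11:41:44.806 # WARNING overcommit_memory is set to 0! ...
pid       => "1"
role      => "M"
timestamp => "08 Sep 11:41:44.806"
loglevel  => "#"
msg       => "WARNING overcommit_memory is set to 0! ..."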

Logstash configuration file: input

   redis {
        host => "10.10.0.98"
        port => "6379"
        key => "redis-6379-117"
        data_type => "list"
        password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
        threads => "5"
        db => "5"
      }

filter

    if [type] == "redis-6379-117" {
     grok {
      patterns_dir => "/etc/logstash/patterns.d"
      match => { "message" => "%{REDISLOG}" }
    }
    mutate {
      # gsub patterns are regular expressions, so "." and "*" have to be escaped,
      # otherwise every character of the field would be replaced
      gsub => [
       "loglevel", "\.", "debug",
       "loglevel", "-", "verbose",
       "loglevel", "\*", "notice",
       "loglevel", "#", "warning",
       "role","X","sentinel",
       "role","C","RDB/AOF writing child",
       "role","S","slave",
       "role","M","master"
      ]
    }
    date {
       match => [ "timestamp" , "dd MMM HH:mm:ss.SSS" ]
       target => "@timestamp"
       remove_field => [ "timestamp" ]
    }
  }
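
After grok, mutate and date have run, the same warning line should come out with roughly these fields (only the relevant ones shown; the Redis timestamp carries no year, so the date filter assumes the current one):

pid        => "1"
role       => "master"
loglevel   => "warning"
msg        => "WARNING overcommit_memory is set to 0! ..."
@timestamp => parsed from "08 Sep 11:41:44.806"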

output

   if [type] == "redis-6379-117" {
     elasticsearch {
       hosts => ["10.0.1.49:9200"]
       index => "logstash-redis-6379-117-%{+YYYY.MM.dd}"
       user => "elastic"
       password => "linuxea"
     }
   }
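
Once events start flowing, the daily indices can be checked directly against Elasticsearch; a quick sketch using the credentials from the output block above:

curl -u elastic:linuxea '10.0.1.49:9200/_cat/indices/logstash-redis-6379-117-*?v'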
