In the Elasticsearch family, there is a trio that collects, analyses and plots logs, which I have found quite beneficial: Logstash + Elasticsearch + Kibana.
Logstash collects logs in various formats and parses them using a given recipe. The parsed logs are then stored in Elasticsearch (or another selected back-end). Finally, using the Elasticsearch indexes, Kibana lets you analyse the logs in various ways with great visuals.
In this post, I will quickly show how to work with openstack-nova logs on an environment that is already set up.
First, let's forward the nova logs to where Logstash is listening. For this we will use logstash-forwarder, which runs on the host where the logs are located. Its job is very simple: it tails the log files and forwards each line in them. Your /etc/logstash-forwarder should look like this:
{
  "network": {
    "servers": [ "10.10.10.10:5000" ],
    "timeout": 15,
    "ssl ca": "path/to/crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/nova/nova-*.log"
      ],
      "fields": { "type": "nova" }
    }
  ]
}
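Note that logstash-forwarder reads this file at startup, so restart it after any change. Assuming it was installed from a package with an init script (an assumption about your setup), something like this should do:

sudo service logstash-forwarder restart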
In the network object, make the changes needed for the basic configuration of logstash-forwarder. In the files array, the paths of the log files to be forwarded are defined, together with a specific type; in our case, it is nova.

On the Logstash side, which listens to and indexes the logs, you need to specify how Logstash will parse them. For this, we will edit the filter part. To build my environment, I followed this guide, in which the filter part is defined in /etc/logstash/conf.d/10-syslog.conf. We will simply edit that file.
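Before touching the filter, make sure Logstash is actually listening on the port the forwarder ships to (5000 above). If you followed the same guide, you should already have a lumberjack input along these lines (a sketch; the certificate and key paths are assumptions based on that guide, so adjust them to your setup):

input {
  lumberjack {
    # Must match the port in /etc/logstash-forwarder
    port => 5000
    # Same certificate the forwarder trusts via "ssl ca"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    # The type "nova" set by the forwarder is kept; an existing
    # type is not overwritten by the input
  }
}

With the input in place, the filter part looks like this: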
filter {
  if [type] == "nova" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{NUMBER:pid} %{LOGLEVEL:loglevel} %{NOVA_MODULE:nova_module} (?:%{DATA})" }
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    }
    if "_grokparsefailure" in [tags] {
      drop { }
    }
  }
}
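One caveat: %{NOVA_MODULE} is not one of Logstash's stock grok patterns, so grok needs to be told where to find a definition for it. A minimal sketch, assuming a custom patterns directory wired up through grok's patterns_dir option (the directory, file name, and regex below are my assumptions; adjust the regex to whatever your module names look like):

# /opt/logstash/patterns/nova -- path is an assumption
# Matches dotted module names such as nova.compute.manager
NOVA_MODULE [a-zA-Z0-9_.]+

with the grok block above pointing at it via:

patterns_dir => ["/opt/logstash/patterns"]

Similarly, if you want the request id as its own field, one way (a sketch, assuming the default nova log format where the request id follows the module name in square brackets) is to extend the match:

# e.g. 2015-02-12 14:23:01.123 1234 INFO nova.compute.manager [req-...] ...
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{NUMBER:pid} %{LOGLEVEL:loglevel} %{NOVA_MODULE:nova_module} \[%{DATA:request_id}\] %{GREEDYDATA:logmessage}" }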
For filtering, we have three parts: grok, date, and the failure handling. Grok parses the gathered logs; with the grok pattern above, we index the timestamp, process id, log level and nova module. Optionally, you can extract more fields here, such as the request_id, by editing the match pattern in grok, as sketched above. The date filter tells Logstash how to parse the extracted timestamp so it is used as the event's timestamp. The last part drops logs that do not match our pattern, so parse failures are not indexed.

Finally, in your Kibana dashboard, with some configuration, you can get a good-looking representation of the nova logs. Here is a table populated by nova logs: