Have you ever found yourself in a place where reading logs feels inefficient
because your best tool for the job is grepping through plain text? Do you
miss the experience of Kibana, with its histograms, visualizations, and
filtering, but your working environment does not provide it?
Well, that was the case for me too, but with a bit of effort, I’ve managed to
find a good-enough solution. It’s based on running Kibana on your local
machine, so there’s no additional server cost for your company. Together with
a setup that ingests logs from a file, it provides a very flexible way
of running Kibana on demand.
Log storage together with the whole Kibana stack may indeed cost an arm and
a leg, so companies may be hesitant to incorporate it into their infrastructure.
Using the approach described below might be a quick win for you if you need
Kibana ad hoc.
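The Helm example Makefiles used below come from the elastic/helm-charts repository; a minimal setup sketch, assuming you already have minikube, kubectl, and Helm installed:

```shell
# Clone the Elastic Helm charts and start a local Kubernetes cluster.
# (Assumes minikube, kubectl, and Helm are already installed.)
git clone https://github.com/elastic/helm-charts.git
cd helm-charts
minikube start
```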
Check out a specific tag of the elastic/helm-charts repository, so that all
components are compatible with each other. At the time of writing this post,
the newest one is v7.17.3.
git checkout tags/v7.17.3
Run Elasticsearch, Logstash, and Kibana. The commands below will create three
Helm releases.
cd elasticsearch/examples/minikube
make install
cd ../../../logstash/examples/elasticsearch
make install
cd ../../../kibana/examples/default
make install
You should be able to see the pods up and running. If you see
ImagePullBackOff instead of the Running status, see below.
➜ ~ kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 100m
elasticsearch-master-1 1/1 Running 0 100m
elasticsearch-master-2 1/1 Running 0 100m
helm-kibana-default-kibana 1/1 Running 0 53s
helm-logstash-elasticsearch 1/1 Running 0 3m51s
What to do if your pods are in the ImagePullBackOff status? It means that
downloading a Docker image timed out or failed, and there can be multiple
reasons for that. You can investigate further by running:
kubectl describe pod elasticsearch-master-0
If the reason is, for example, a slow internet connection, you can load the
image manually, as shown below:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.3
minikube image load docker.elastic.co/elasticsearch/elasticsearch:7.17.3
minikube image list
Enable port forwarding. After that, you should be able to access
http://localhost:5601
and see the Kibana interface.
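The forwarding step above can be done with kubectl; a sketch, assuming the Kibana service is named helm-kibana-default-kibana, matching the pod listing earlier (check `kubectl get svc` if yours differs):

```shell
# Forward local port 5601 to the Kibana service inside minikube.
kubectl port-forward svc/helm-kibana-default-kibana 5601:5601
```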
Congratulations, you now have the whole ELK stack up and running on your local
machine, and you can even load some logs manually from a file in order to get
going quickly. That was fast and easy, wasn’t it?
🍼 Ingesting automatically from a file
You can already load logs manually, but you may want a more automatic
approach in order to follow them in real time, as they appear.
The first thing you'll need is a log file that you can follow. I want to read
the logs of my service running in production, so I'll fetch them from Kubernetes
using the --follow flag and pipe them into the file
tmp.log in my home directory.
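As a sketch, the streaming command could look like this; my-service is a hypothetical workload name, so substitute your own:

```shell
# Follow the logs of a (hypothetical) workload and stream them
# into a file that Filebeat can later pick up.
kubectl logs --follow deployment/my-service > ~/tmp.log
```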
Having the logs now streamed into a file on my local system, I can try
ingesting them into Elasticsearch. In order to do that,
Filebeat
seems like a good fit. Let's download and unpack it.
Please note that I'm using the same version as the rest of the stack:
7.17.3. You may also need to choose a package that fits
your operating system on the
download
page.
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.3-darwin-x86_64.tar.gz
tar xzvf filebeat-7.17.3-darwin-x86_64.tar.gz
Then we'll need to tune Filebeat's configuration a little. Enter the directory,
open filebeat.yml, and configure filebeat.inputs so
that it uses your local log file.
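A minimal sketch of the relevant filebeat.yml section; the path is only an example and should point at your own log file:

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # Example path; replace with the file you are streaming logs into.
    - /Users/you/tmp.log
```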
Filebeat obviously has plenty of other options that can be configured.
You can find more details in the
documentation.
We're almost ready to go. We still need to enable port forwarding for Elasticsearch,
so that Filebeat can communicate with it, and then let's start it up!
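A sketch of those two steps, assuming the Elasticsearch service is named elasticsearch-master (the chart default) and that Filebeat's default output of localhost:9200 is kept:

```shell
# Forward Elasticsearch's HTTP port in the background...
kubectl port-forward svc/elasticsearch-master 9200:9200 &
# ...then start Filebeat from the directory you unpacked it into;
# -e logs to stderr so you can watch the connection being established.
./filebeat -e
```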
You’ll be able to see Filebeat logs reporting that the connection has been
established and that it periodically scans for new changes. Navigate to
http://localhost:5601/app/discover.
First, you’ll need to create an index pattern. Just type filebeat*
and return to the Discover page.
You should be able to see your logs, congratulations! You now have a powerful
tool in hand, and it didn’t require much time or a complex setup.
🧪 Further tuning
In order to utilize the advantages that Kibana has to offer, you may want to
use ingest pipelines, which allow you to parse your logs into individual fields
that you can use for filtering and searching.
It’s very simple if you already have one created, because
in that case the only thing you need to do is set its name in
filebeat.yml under the
filebeat.inputs[].pipeline
property.
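For example (a sketch; my-pipeline is a hypothetical pipeline name, and the path is an example):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/you/tmp.log
  # Name of an existing ingest pipeline in Elasticsearch (hypothetical).
  pipeline: my-pipeline
```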
If you don’t have one and you don’t know how to create it, check out the
manual way of uploading files.
It provides a feature that automatically detects your log format and tries to
create an ingest pipeline for it. It’s not ideal, but it may be good enough for
your needs.
🧹 Cleaning up
When you’re done, you can stop the whole stack by running
helm uninstall for all three Helm releases that you’ve created.
You can see their names by running helm list.
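A sketch of the cleanup; the release names below come from the earlier pod listing, except the Elasticsearch one, which is an example, so use whatever `helm list` actually shows:

```shell
# List the releases created earlier, then remove each of them.
helm list
helm uninstall helm-kibana-default
helm uninstall helm-logstash-elasticsearch
helm uninstall helm-es-minikube   # example name; use the one shown by `helm list`
```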
Alternatively, you can stop the whole minikube cluster by running
minikube stop, but the pods will start again once you restart the
cluster.
🎉 That’s it, enjoy reading your logs! And let me know if it worked for you!