Centralized Logging EFK — Helm (Elasticsearch, Fluentd, Kibana)

Rajpratap Singh
5 min read · Jan 14, 2021


EFK Flow

Introduction

Centralized logging helps you identify issues with your servers and applications: it gives you the flexibility to search all of your logs from a single place and to go back in time to cross-verify issues by correlating all logs within a specific time frame.

What is the EFK Stack?

The EFK Stack is a collection of three open-source products: Elasticsearch, Fluentd, and Kibana. Together, these three components are most commonly used for monitoring, troubleshooting, and securing IT environments. If we want to know what is going on in the backend infrastructure, or to guard against unauthorized access, the best approach is centralized log collection and monitoring.

Why Is Log Analysis Important?

Nowadays, organizations cannot afford even one second of downtime, slow performance, or a compromise of their infrastructure security. Performance and security issues can damage a brand's image and may result in direct revenue loss. To ensure apps are available, performant, and secure at all times, engineers rely heavily on the different types of data generated by their applications and the infrastructure supporting them.

Logs have always been an important source of data for organizations. The volume of data generated by modern environments (Docker, containers, microservices, and orchestration infrastructure) is constantly growing, which creates a challenge in itself when hundreds of containers generate terabytes of log data a day. Centralized logging with EFK allows engineers, whether DevOps or IT operations, to gain visibility and ensure the IT infrastructure is available and performant at all times.

Centralized Logging & Analysis should have the following capabilities:

  • Log-Forwarder — Forwarding logs from multiple nodes (application servers) to aggregators.
  • Log-Aggregator — Collecting and shipping logs from multiple data sources.
  • Log-Processing — Transforming log messages into meaningful data for easier analysis.
  • Log-Storage — Storing data for a defined, extended period of time to support monitoring and security use cases. This can be done with Kubernetes Persistent Volumes.
  • Log-Analysis — Filtering the useful data by querying it and building visualizations and dashboards on top of it. This can be done with Kibana queries.

Prerequisites ->

* Ubuntu VM (10 vCPUs, 16 GB RAM, 80 GB HDD)
* Docker on Ubuntu
* Kubernetes on Ubuntu
* Helm installed with Tiller

Add the Helm stable charts repo to install the EFK stack ->

$ helm repo add stable https://charts.helm.sh/stable
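
Then refresh the local chart index so Helm picks up the latest chart versions from the newly added repo:

$ helm repo update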

1> Install Elasticsearch using Helm ->

To install the Elasticsearch Helm chart from the stable repo on an Ubuntu VM with the above-mentioned configuration, disable persistence for both the master and data nodes in es-values.yaml.
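
A minimal es-values.yaml might look like this (the keys below are based on the stable/elasticsearch chart and may differ between chart versions, so check the chart's default values):

master:
  persistence:
    enabled: false
data:
  persistence:
    enabled: false

With that file in place, run the following command: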

$ helm install --name elasticsearch stable/elasticsearch --namespace=logging -f /root/es-values.yaml

Now, wait 7–10 minutes for all the required components to be created. After that, we can check the created pods with the following command:
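
$ kubectl get pods -n logging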

Also, check the created services with the command:
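
$ kubectl get svc -n logging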

2> Install Kibana using Helm ->

To access the Kibana dashboard easily, we'll override the service type from ClusterIP (the default) to LoadBalancer, which will create a public IP address. In Helm, we override settings from values.yaml with another YAML file, kib-values.yaml, so create a file with the following content:

files:
  kibana.yml:
    server.name: kibana
    server.host: "0"
    elasticsearch.hosts: http://elasticsearch-client:9200
service:
  type: LoadBalancer

Now run the following command to install Kibana using Helm:

$ helm install --name kibana stable/kibana --namespace=logging -f /root/kib-values.yaml

Wait a few seconds, then check that the pod is up and running:
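
$ kubectl get pods -n logging | grep kibana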

3> Install Fluentd using Helm ->

Now we need to install Fluentd as a DaemonSet to collect logs from the Kubernetes cluster. Use the following command:

$ helm install --name fluentd stable/fluentd-elasticsearch --namespace=logging

Wait a few seconds and verify that the Fluentd DaemonSet is up and running:
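
$ kubectl get daemonset -n logging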

4> Verify Fluentd Log Indices ->

After successfully installing the entire EFK stack using Helm, we can verify the log indices created by Fluentd (going back over the previous 2–3 weeks, if logs are available) with the following command:

$ curl http://X.X.X.X:9200/_cat/indices?v
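
By default the fluentd-elasticsearch chart writes daily, Logstash-style indices, so the listing can also be narrowed to that pattern (the logstash-* prefix is the chart default and may differ if it was overridden):

$ curl "http://X.X.X.X:9200/_cat/indices/logstash-*?v"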

5> Accessing Elasticsearch & the Kibana Dashboard ->

$ while true; do kubectl port-forward --address X.X.X.X svc/elasticsearch-client 9200 -n logging; done
$ while true; do kubectl port-forward --address X.X.X.X deployment/kibana 5601:5601 -n logging; done

Now access the Kibana dashboard at the URL:

http://X.X.X.X:5601

6> View Sample Logs ->

Let's create a sample namespace in Kubernetes and verify its logs in Kibana, using the commands below:
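
For example (the namespace name matches the search term used below; the test pod and its log message are just illustrative, and any workload writing to stdout in the new namespace will do):

$ kubectl create namespace fluentd-pvc-test
$ kubectl run log-test --image=busybox -n fluentd-pvc-test --restart=Never -- sh -c 'echo hello from fluentd-pvc-test'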

Now search for fluentd-pvc-test in the Kibana search box.

Conclusion ->

That wraps up this blog; beginners can use the above-mentioned steps to deploy an EFK stack from scratch and monitor Kubernetes logs.

We can use Kibana queries to filter historical logs and analyze issues.

Please feel free to share your thoughts, questions, and feedback in the comments.

Connect with me @ https://www.linkedin.com/in/rajprataprps/
