Elasticsearch & Kibana

We use Elasticsearch & Kibana for:

  • Soluvas Geo. Currently this is the only live production usage.

In the future, we also plan to use Elasticsearch for:

  • Centralized logging.
  • Performance metrics.
  • Operational metrics / time series data.
  • Faceted search, full text search, and geospatial search, shadowing the MongoDB data.

Deployment notes:

  • If using Lightsail, it’s required to “Enable Lightsail VPC Peering” so other resources (like ALB) can access it.

Pricing compared to Amazon Elasticsearch:

  • Lightsail 1 GB is $5/mo with 40 GB storage
  • Amazon Elasticsearch t2.micro Singapore is $20.60/mo ($20.44 + $0.60 1 GB storage) (Note: t2.micro/t3.small does not support reserved instances)
  • Amazon Elasticsearch t3.medium Singapore is $81.92/mo with 1 GB storage
  • Amazon Elasticsearch t3.medium Singapore Reserved Instance with No Upfront is $65.86/mo with 1 GB storage. Important: Credits cannot apply to Full & Partial Upfront payments!
  • Fargate Spot Singapore: vCPU $0.015168/vCPU/hr, RAM $0.001659/GB/hr. 0.25 vCPU ($2.82) + 1 GB ($1.23) = $4.05.
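For reference, a quick sanity check of the Fargate Spot arithmetic above (a sketch; it assumes a 744-hour month, which is what the quoted figures imply):

# Fargate Spot Singapore: 0.25 vCPU + 1 GB RAM over 744 hours
echo "0.25 * 0.015168 * 744" | bc -l   # vCPU   -> ~2.82
echo "1.00 * 0.001659 * 744" | bc -l   # memory -> ~1.23
echo "(0.25 * 0.015168 + 1.00 * 0.001659) * 744" | bc -l   # total -> ~4.06 ($4.05 above sums the rounded parts)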

Internal References:

Installing Single-node Elasticsearch & Kibana on Ubuntu VM

The goal is to have:

  • Authentication for the "elastic" user set up with a strong password
  • Port 9200 serving Elasticsearch HTTP internally inside VPC, connectable from ALB’s target group
  • Port 5601 serving Kibana HTTP privately inside VPC, connectable from ALB’s target group
  • Port 9243:
    • path / serving Elastic HTTPS both privately and publicly via ALB
    • path /app/kibana serving Kibana HTTPS both privately and publicly via ALB
  • Port 9300 (Java transport client) disabled

Note about TLS: Elasticsearch will not start in multi-node production mode if TLS is disabled. So offloading TLS to ALB is just temporary.
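A quick way to sanity-check this port layout on the VM once everything is installed (a sketch; ss ships with Ubuntu):

# 9200 (Elasticsearch) and 5601 (Kibana) should be listening;
# 9243 is terminated at the ALB, and 9300 should be closed per the goals above.
sudo ss -ltnp | grep -E ':(9200|5601|9300)\b'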

Launch Lightsail VM & Prepare Ubuntu

Required RAM is 1 GB when installing via deb packages. If using the docker-compose method, required RAM becomes 2 GB, because 1 GB often causes the VM to hang when starting docker-compose.

Current configuration as of May 22, 2021:

  • elasticsearch and kibana v7.x installed using deb packages.
  • Make sure bootstrap.memory_lock is 'false'.
  • Using /etc/default/elasticsearch, set the es01 JVM options to -Xms256m -Xmx256m. When using Docker, both 256m and 300m caused excessive GC; it seems more stable with the deb packages than with Docker.
  • Kibana is a Node.js app, so there are no JVM options to tune.

Install Elasticsearch & Kibana using Ubuntu Package Repository

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt -y install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get -y install elasticsearch
# Start Elasticsearch automatically
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service

# Tail journal
sudo journalctl -f --unit elasticsearch

# Test
curl http://localhost:9200/

Important configuration files

  1. /etc/elasticsearch/elasticsearch.yml. What to edit:

    cluster.name: "es-docker-cluster"
    network.host: 0.0.0.0
    discovery.type: single-node
    # WARNING: mlockall might cause the JVM or shell session to exit if it tries to allocate more memory than is available!
    # bootstrap.memory_lock: 'true'
    xpack.license.self_generated.type: basic
    xpack.security.enabled: true
  2. /etc/default/elasticsearch. What to edit:

    ES_JAVA_OPTS="-Xms256m -Xmx256m"

Important folders

  1. /var/lib/elasticsearch (this should be contents of data01/ folder, owned by elasticsearch:elasticsearch)
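If restoring data previously produced by the Docker Compose setup, a minimal sketch (assumes the old data is in ./data01/; adjust the path as needed):

# Stop Elasticsearch before touching the data directory
sudo systemctl stop elasticsearch.service
# Copy the contents of the old data01/ folder into the deb package's data path
sudo cp -a data01/. /var/lib/elasticsearch/
# The deb package runs Elasticsearch as the elasticsearch user
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
sudo systemctl start elasticsearch.service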

Configuring memory usage

Edit /etc/default/elasticsearch -> ES_JAVA_OPTS="-Xms256m -Xmx256m"

sudo systemctl restart elasticsearch.service

Kibana

Reference: https://www.elastic.co/guide/en/kibana/current/deb.html

# This should have been done before, when installing elasticsearch
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get -y install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update

# Just install kibana
sudo apt-get -y install kibana

Configure Kibana

  • /etc/kibana/kibana.yml:
server.name: es03-sg.lovia.life
server.host: 0.0.0.0
server.basePath: /app/kibana
server.rewriteBasePath: true
server.publicBaseUrl: https://es03-sg.lovia.life:9243/app/kibana
elasticsearch.hosts: ['http://es01:9200']
# for v8.x, will change to kibana_system
elasticsearch.username: kibana
elasticsearch.password: **************
monitoring.cluster_alerts.email_notifications.email_address: [email protected]
# Useful for diagnostics
# logging.verbose: true

# Enable Kibana to start automatically
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
# Restart Kibana
sudo systemctl restart kibana.service

# Tail journal (not all logs are here)
sudo journalctl -f -u elasticsearch -u kibana
# Tail kibana.log (detailed logs here)
sudo tail -F /var/log/kibana/kibana.log

# Test localhost binding
curl -v http://localhost:5601/app/kibana/
# Test external IP binding
curl -v http://IP_ADDRESS:5601/app/kibana/

Application Load Balancer (ALB) Configuration

For elasticsearch service’s health check:

  • Path: /
  • Successful status codes: 200,401 (401 is expected because / requires authentication).

For kibana service’s health check:

  • Path: /app/kibana/login
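A sketch of setting these health checks with the AWS CLI (the target group ARNs are placeholders):

# Elasticsearch target group: accept 401 because / requires authentication
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/ES-PLACEHOLDER \
  --health-check-path / \
  --matcher '{"HttpCode": "200,401"}'

# Kibana target group: the login page answers without credentials
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/KIBANA-PLACEHOLDER \
  --health-check-path /app/kibana/login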

Troubleshooting

Problem: Authentication of [kibana_system] was terminated by realm [reserved] – failed to authenticate user [kibana_system]

Diagnosis: Kibana cannot authenticate to Elasticsearch with the configured credentials.

Set passwords for built-in users, then double-check elasticsearch.username and elasticsearch.password in /etc/kibana/kibana.yml.

Reference: https://discuss.elastic.co/t/x-pack-kibana-failed-to-authenticate-user/117193/4

Solution: Do not use the kibana_system user before v8.0; use the kibana built-in user instead (a sketch for setting built-in passwords follows).
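If the built-in user passwords have never been set, the deb package ships a helper for that (interactive mode prompts for each built-in user, including elastic and kibana):

# Prompts for passwords of the built-in users (elastic, kibana, ...)
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
# Restart Kibana so it picks up the new credentials
sudo systemctl restart kibana.service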

Upgrading Elasticsearch & Kibana

For minor versions, you can use the usual apt update and apt upgrade; a sketch follows.

For major versions, check official upgrade docs.
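A minimal sketch of a minor-version upgrade via apt (assumes the elastic-7.x repository configured above):

sudo apt-get update
sudo apt-get install --only-upgrade elasticsearch kibana
# Restart so the new versions take effect
sudo systemctl restart elasticsearch.service
sudo systemctl restart kibana.service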

Backup & Restore Elasticsearch Cluster (deb packages)

Back up the /var/lib/elasticsearch/ folder (or snapshot the entire Lightsail instance).

Also back up the configuration files and folders (a sketch follows the list below):

  • /etc/elasticsearch
  • /etc/kibana
  • /etc/default/elasticsearch
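A minimal sketch of a cold backup (assumes a brief Elasticsearch downtime is acceptable; the archive name is arbitrary):

sudo systemctl stop elasticsearch.service
# Data plus configuration in one archive
sudo tar -czf es-backup-$(date +%F).tar.gz \
  /var/lib/elasticsearch /etc/elasticsearch /etc/kibana /etc/default/elasticsearch
sudo systemctl start elasticsearch.service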

Regenerate / Start A New Basic License

If you get this error/warning/notice in Elasticsearch logs:

blocking [indices:monitor/stats] operation due to expired license. Cluster health, cluster stats and indices stats operations are blocked on license expiration. All data operations (read and write) continue to work. If you have a new license, please update it. Otherwise, please reach out to your support contact.

Solution:

Make sure in Docker Compose YML, license is basic and not trial:

      - xpack.license.self_generated.type=basic

As suggested by TimV in this forum thread:

# HTTP:
# curl -uelastic -XPOST 'http://localhost:9200/_xpack/license/start_basic'
# We use HTTPS
curl -uelastic -XPOST 'https://es01-sg.lovia.life:9243/_xpack/license/start_basic?acknowledge=true'

Automatic S3 Backups

TODO

Installing Elasticsearch & Kibana using Docker (did not work with < 2 GB RAM)

It seems that 1 GB struggles when using Docker because of Docker’s overhead, while 2 GB is OK. With the Debian packages, 1 GB works fine.

I tried adding a swapfile, but even just 512 MB caused high CPU:

sudo fallocate -l 512MB /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swapfile across reboots: add this line to /etc/fstab
sudo nano /etc/fstab
/swapfile swap swap defaults 0 0

# Check
sudo swapon --show

Previously, I tried to run with 1 GB by using Docker memory limits, but this caused high CPU and hanging:

  • Ensuring JVM options include -XX:+UseContainerSupport
  • Setting es01’s mem_limit to 768m
  • Setting kib01’s mem_limit to 384m

After launching the VM, to prepare Ubuntu, see Ubuntu VM.

Docker Host Kernel Configuration

Why is this necessary? See Docker production pre-requisites.
Edit /etc/sysctl.d/60-vm.conf:

sudo nano /etc/sysctl.d/60-vm.conf
vm.max_map_count=262144
# Ctrl+X to exit nano

sudo service procps start
sudo sysctl -w vm.max_map_count=262144

Check:

sysctl vm.max_map_count

Install Docker Engine

Reference: https://docs.docker.com/engine/install/ubuntu/

sudo apt -y install apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io

# add ubuntu user to docker group
sudo adduser ubuntu docker

Now log out and log in again so docker group permissions take effect.

Install Docker Compose

The recommended way is to install and upgrade Docker Compose as described in the Docker Compose documentation, not from the Ubuntu repositories.

# change the version to latest
sudo curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

Troubleshooting (Docker Compose approach)

Problem: a container exited with code 137. This means it was killed due to an out-of-memory (OOM) condition.

Make sure docker-compose.yml does not use bootstrap.memory_lock: 'true'.

Make sure the JVM options specify explicit -Xms and -Xmx values that are low relative to the available RAM, but still sufficient for each container.

Check Docker RAM usage:

docker stats --all
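To confirm that the kernel OOM killer was involved, check the kernel log:

# Kernel log entries left by the OOM killer
sudo dmesg | grep -iE 'killed process|out of memory'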

Upgrading Elasticsearch & Kibana on Docker

For major versions, check official upgrade docs. For minor versions, you can do (as suggested in this forum thread):

  1. Change .env's VERSION directly to the latest minor version, with the latest patch version
  2. Docker Compose up:
    docker-compose -f elastic-docker-tls.yml up -d
  3. Check logs
    docker-compose -f elastic-docker-tls.yml logs -f

Backup & Restore Elasticsearch Cluster using Docker

As we use Docker Compose, we can simply back up the data01/ folder (or snapshot the entire Lightsail instance).
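A minimal sketch (assumes the compose file is elastic-docker-tls.yml, as used above, and that a brief downtime is acceptable):

# Stop the containers so the data files are consistent
docker-compose -f elastic-docker-tls.yml stop
sudo tar -czf es-data01-backup-$(date +%F).tar.gz data01/
docker-compose -f elastic-docker-tls.yml start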
