How to Set Up a Basic ELK Stack on Arch Linux

ELK (Elasticsearch, Logstash, Kibana) is a set of three technologies from Elastic that can be combined to collect and visualize log data. Think of it as rsyslog on steroids and with pretty colors. Ryan and Bailey have both implemented ELK in our personal infrastructures recently, and while it's pretty simple once you know what you're doing, starting out can be confusing.


So this article will be the first in a series covering the basics of ELK, beats, and visualizing data.


First off, this guide is written using examples from an Arch Linux host. Theoretically, you should be able to follow along on any other distro though. That said, on Arch we recommend: implementing the suggestions from this article on using Arch as a server and using an AUR helper (yay is the go-to choice of the Angry Sysadmins).


Install Elasticsearch

elasticsearch was added to Arch’s official repositories, so installing it is as easy as:

pacman -S elasticsearch


Before starting it, we need to make a few edits to its configuration file.


First, enable cross-origin access:

echo 'http.cors.allow-origin: "/.*/"' >> /etc/elasticsearch/elasticsearch.yml 
echo 'http.cors.enabled: true' >> /etc/elasticsearch/elasticsearch.yml

We also need to make it accessible on the local network:

echo 'network.bind_host:' >> /etc/elasticsearch/elasticsearch.yml
echo 'node.master: true' >> /etc/elasticsearch/elasticsearch.yml
echo 'transport.host: localhost' >> /etc/elasticsearch/elasticsearch.yml
echo 'transport.tcp.port: 9300' >> /etc/elasticsearch/elasticsearch.yml


Setting network.bind_host to will allow Elasticsearch to accept connections on any of the server's IP addresses. If you want it to listen on only one, replace with the desired IP address.


Next, we need to edit the Java VM properties and give the JVM more memory.

nano /etc/elasticsearch/jvm.options


Then edit the -Xms (starting memory) and -Xmx (maximum memory) values to 2G or greater (in the example it is 6GB). Making these values different is allowed, but can lead to weird behavior. Consider yourself warned.
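For reference, the two heap lines in /etc/elasticsearch/jvm.options would look like this at the 6 GB mentioned above (values are illustrative; size them to your host):

```
```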



Finally, start and enable elasticsearch

systemctl start elasticsearch.service
systemctl enable elasticsearch.service
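To confirm Elasticsearch came up, you can query its REST API on port 9200. A quick sketch (the || fallback just keeps the command from erroring while the service is still starting):

```shell
# Query cluster health; print a fallback message if the service isn't reachable yet
HEALTH="$(curl -sf "http://localhost:9200/_cluster/health?pretty" || echo 'Elasticsearch not reachable yet')"
echo "$HEALTH"
```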

Install Logstash

Like elasticsearch, logstash is in the official Arch repos and can be easily installed with pacman:

pacman -S logstash


Now we need to create a Logstash pipeline config to accept incoming logs. Any .conf files in /etc/logstash/conf.d/ will be automatically loaded when the service starts. Since we are only setting up basic inputs for now, we'll just make a file called logstash-simple.conf.

nano /etc/logstash/conf.d/logstash-simple.conf


Add the following:

input {
  # local file input
  file {
    path => "/var/log/faillog"
    start_position => "beginning"

  # network syslog input
  syslog {
    host => ""
    port => 514

  # beats input
  beats {
    port => 5044

output {
  elasticsearch {
    hosts => ["localhost:9200"]
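Optionally, a filter block between input and output can parse raw lines into structured fields. A minimal sketch using Logstash's built-in SYSLOGLINE grok pattern; it is not required for this basic setup:

```
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
```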


Now start and enable logstash

systemctl start logstash.service
systemctl enable logstash.service
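Once Logstash is running, you can exercise the syslog input by hand with logger from util-linux. A sketch assuming the UDP listener on port 514; UDP is fire-and-forget, so the command succeeds even if nothing picks the message up:

```shell
# Send a test syslog message over UDP to the Logstash syslog input
logger --udp --server localhost --port 514 "ELK test message" && SENT=yes
echo "sent: ${SENT:-no}"
```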


Install Kibana

The last part of the stack to install and setup is Kibana, which, once more, can easily be installed using pacman.

pacman -S kibana


Next, we need to edit Kibana's config to allow inbound connections

nano /etc/kibana/kibana.yml


Uncomment server.host and set it to, or to the specific IP address that you want Kibana to listen on. EX: ""
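The relevant lines in /etc/kibana/kibana.yml would then read roughly as follows (the Elasticsearch URL shown is the default; note that newer Kibana versions name that setting elasticsearch.hosts instead):

```
server.port: 5601
elasticsearch.url: "http://localhost:9200"
```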


Finally, start and enable Kibana

sudo systemctl start kibana.service
sudo systemctl enable kibana.service


Install Nginx as a Reverse Proxy

You should be able to bring up the Kibana interface by going to your server at http://x.x.x.x:5601. In order to access it on port 80, we need to set up Nginx as a reverse proxy. This also allows for better security and, if your Kibana instance is public-facing, easy configuration of SSL.


The package can be installed with:

pacman -S nginx


Next, we need apache-tools for htpasswd. We can install it from the AUR.

yay -S apache-tools


Now, we need to edit the Nginx config. Arch's nginx package handles sites as one massive configuration by default, rather than one file per site. So, open the config and remove the "server" block, which we will replace with our own.

nano /etc/nginx/nginx.conf


Add this

# Nginx proxy for Elasticsearch + Kibana

server {
    listen                80;
    server_name           localhost;
    access_log            /var/log/nginx-logstash.log;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/kibana/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;


Next, we need to generate an htpasswd file for basic authentication with the reverse proxy; replace username and password below with your desired credentials.

sudo htpasswd -c -b /etc/kibana/htpasswd.users username password
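If you'd rather not pull in apache-tools just for this one file, openssl can generate a compatible entry. A sketch using the apr1 scheme that htpasswd defaults to; username and password are placeholders:

```shell
# Generate an htpasswd-style "user:hash" line with openssl instead of htpasswd
ENTRY="username:$(openssl passwd -apr1 password)"
echo "$ENTRY"
# Append it to the auth file, e.g.:
# echo "$ENTRY" | sudo tee -a /etc/kibana/htpasswd.users
```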


Finally, we start and enable Nginx

sudo systemctl start nginx.service
sudo systemctl enable nginx.service


To make sure that everything is working, go to your ELK server’s IP address in a web browser and you should be prompted for a username and password. Once you have signed in, you should see the Kibana web interface.



Congrats! You've set up an ELK stack. For now it's just a kinda-pretty but useless website taunting you with a lack of logs, so in the next article we will cover beats, which are used to ship logs and other data into the stack for visualization in Kibana. You can read more about beats here.


Once the article on beats is finished, we will update this article with a link to it. You can also use the form in the sidebar to sign up for an email notification whenever we publish a new article.

About: Bailey Kasin

I build virtual environments and challenges for Cybersecurity students to complete as a way to gain experience before graduating and entering the workforce.
