How to Install Elastic Stack on Ubuntu 18.04 LTS
Elasticsearch is an open source search engine based on Lucene, developed in Java. It provides a distributed, multitenant full-text search engine with an HTTP interface and a web dashboard (Kibana). Data is stored, queried, and retrieved as JSON documents. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files. Elasticsearch is the heart of the ‘Elastic Stack’, also known as the ELK Stack.
Logstash is an open source tool for managing events and logs. It provides real-time pipelining for data collection. Logstash collects your log data, converts it into JSON documents, and stores it in Elasticsearch.
Kibana is an open source data visualization tool for Elasticsearch. Kibana provides a pretty dashboard web interface. It allows you to manage and visualize data from Elasticsearch. It’s not just beautiful, but also powerful.
In this tutorial, I will show you how to install and configure Elastic Stack on an Ubuntu 18.04 server for monitoring server logs. Then I’ll show you how to install and configure ‘Elastic Beats’ on an Ubuntu 18.04 and a CentOS 7 client server.
Prerequisites
- 3 Servers
- Ubuntu 18.04 with 4GB Ram/memory as ‘elk-master’ – 10.0.15.10
- Ubuntu 18.04 with 512MB/1GB Ram/Memory as ‘elk-client01’ – 10.0.15.21
- CentOS 7.5 with 512MB/1GB Ram/Memory as ‘elk-client02’ – 10.0.15.22
- Root privileges
What we will do
- Install Elastic Stack
- Install Java
- Install and Configure ElasticSearch
- Install and Configure Kibana
- Install and Configure Nginx as Reverse Proxy for Kibana
- Install and Configure Logstash
- Install and Configure Filebeat on Ubuntu 18.04
- Install and Configure Filebeat on CentOS 7.5
- Testing
Step 1 – Install Elastic Stack
In this first step, we will install and configure the ‘Elastic Stack’ on the ‘elk-master’ server, so run all commands and stages for this step on the ‘elk-master’ server only. We will install and configure each component of the elastic stack, including Elasticsearch, Logstash shipper, and Kibana Dashboard with Nginx web server.
Install Java
Java is required for the Elastic Stack deployment. Elasticsearch requires Java 8. It is recommended to use the Oracle JDK 1.8, so we will install Java 8 from a PPA repository.
Install the ‘software-properties-common’ and ‘apt-transport-https’ packages, and then add the ‘webupd8team’ Java PPA repository. Run the ‘apt install’ and ‘add-apt-repository’ commands below.
sudo apt install software-properties-common apt-transport-https -y
sudo add-apt-repository ppa:webupd8team/java -y
Now install the ‘oracle-java8-installer’ package.
sudo apt install oracle-java8-installer -y
After the installation is complete, check the java version.
java -version
Java 1.8 is now installed on the system.
Next, we will configure the java environment. Check the java binary file using the command below.
update-alternatives --config java
You will see that the Java binary file is located in the ‘/usr/lib/jvm/java-8-oracle‘ directory.
Now create the profile file ‘java.sh’ under the ‘profile.d’ directory.
vim /etc/profile.d/java.sh
Paste the Java environment configuration below.
#Set JAVA_HOME
JAVA_HOME="/usr/lib/jvm/java-8-oracle"
export JAVA_HOME
PATH=$PATH:$JAVA_HOME
export PATH
Save and exit.
Make the file executable and load the configuration file.
chmod +x /etc/profile.d/java.sh
source /etc/profile.d/java.sh
Now check the java environment using the command below.
echo $JAVA_HOME
You will see that the Java home directory is ‘/usr/lib/jvm/java-8-oracle‘.
Install Elasticsearch
After installing Java, we will install the first component of the Elastic Stack: Elasticsearch.
Add the Elastic GPG key and the Elastic repository to the system.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
Now update the repository and install the elasticsearch package using the command below.
sudo apt update
sudo apt install elasticsearch -y
After the installation is complete, go to the ‘/etc/elasticsearch’ directory and edit the configuration file ‘elasticsearch.yml’.
cd /etc/elasticsearch/
vim elasticsearch.yml
Uncomment the ‘network.host’ line and change the value to ‘localhost’, and uncomment the ‘http.port’ line for the elasticsearch port configuration.
network.host: localhost
http.port: 9200
Save and exit.
Now start the elasticsearch service and enable it to launch every time on system boot.
systemctl start elasticsearch
systemctl enable elasticsearch
Elasticsearch is now up and running. Check it using the netstat and curl commands below.
netstat -plntu
curl -XGET 'localhost:9200/?pretty'
You will see that Elasticsearch version ‘6.2.4’ is running on the default port ‘9200’.
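Optionally, you can also query the cluster health API for a quick status check (on a single-node setup the cluster will typically report a ‘yellow’ status, because replica shards cannot be allocated).
curl -XGET 'localhost:9200/_cluster/health?pretty'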
The elasticsearch installation has been completed.
Install and Configure Kibana Dashboard
The second component is the Kibana Dashboard. We will install Kibana from the Elastic repository and configure the kibana service to run on the localhost address.
Install Kibana dashboard using the apt command below.
sudo apt install kibana -y
Now go to the ‘/etc/kibana’ directory and edit the configuration file ‘kibana.yml’.
cd /etc/kibana/
vim kibana.yml
Uncomment the ‘server.port’, ‘server.host’, and ‘elasticsearch.url’ lines and make sure the values match the configuration below.
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Save and exit.
Now start the kibana service and enable it to launch every time at system boot.
sudo systemctl enable kibana
sudo systemctl start kibana
The kibana dashboard is now up and running on the ‘localhost’ address and the default port ‘5601’. Check it using the netstat command below.
netstat -plntu
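You can also send a simple HTTP request to the Kibana port to make sure it responds on the localhost address:
curl -I http://localhost:5601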
Kibana dashboard installation has been completed.
Install and Configure Nginx as Reverse-Proxy for Kibana
In this tutorial, we will be using the Nginx web server as a reverse proxy for the Kibana Dashboard.
Install Nginx and the ‘apache2-utils’ packages to the system.
sudo apt install nginx apache2-utils -y
After the installation is complete, go to the ‘/etc/nginx’ configuration directory and create a new virtual host file named ‘kibana’.
cd /etc/nginx/
vim sites-available/kibana
Paste the Nginx virtual host configuration below.
server {
    listen 80;

    server_name elastic-stack.io;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit.
Next, we will create basic authentication for accessing the Kibana dashboard. We will create the basic authentication user with the htpasswd command as below.
sudo htpasswd -c /etc/nginx/.kibana-user elastic
Type the password for the ‘elastic’ user.
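If you want to allow additional users later, run htpasswd again without the ‘-c’ option so the existing file is not overwritten (the username ‘anotheruser’ below is only an example):
sudo htpasswd /etc/nginx/.kibana-user anotheruser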
Activate the kibana virtual host and test the nginx configuration.
ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
nginx -t
Make sure there is no error, then start the Nginx service and enable it to launch every time at system boot.
systemctl enable nginx
systemctl restart nginx
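Optionally, test the reverse proxy from the command line. This assumes the ‘elastic-stack.io’ name resolves to this server (for example through an ‘/etc/hosts’ entry); replace ‘yourpassword’ with the password you created for the ‘elastic’ user.
curl -u elastic:yourpassword -I http://elastic-stack.io/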
Nginx installation and configuration as a Reverse-proxy for the Kibana dashboard have been completed.
Install and Configure Logstash
The last Elastic Stack component in this guide is Logstash. We will install and configure Logstash to centralize server logs from client sources with filebeat, then filter and transform the syslog data and move it to the stash (Elasticsearch).
Before installing logstash, make sure you check the OpenSSL version on your server.
openssl version -a
For this guide, we will be using OpenSSL ‘1.0.2o’. If you’re using a different OpenSSL version (for example one from the 1.1.x series), you may get an error in the logstash and filebeat SSL connection.
Install logstash using the apt command below.
sudo apt install logstash -y
After the installation is complete, we will generate the SSL certificate key to secure the log data transfer from the client filebeat to the logstash server.
Edit the ‘/etc/hosts’ file using vim.
vim /etc/hosts
Add the configuration below.
10.0.15.10 elk-master elk-master
Save and exit.
Now create a new ‘ssl’ directory under the logstash configuration directory ‘/etc/logstash’ and go to the ‘/etc/logstash’ directory.
mkdir -p /etc/logstash/ssl
cd /etc/logstash/
Generate the SSL certificate for Logstash using the openssl command as below.
openssl req -subj '/CN=elk-master/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt
The SSL certificate files for Logstash have been created in the ‘/etc/logstash/ssl’ directory.
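You can inspect the generated certificate to confirm the Common Name and the validity period:
openssl x509 -in ssl/logstash-forwarder.crt -noout -subject -dates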
Next, we will create new configuration files for logstash. We will create a ‘filebeat-input.conf’ file as the input configuration for filebeat, a ‘syslog-filter.conf’ file for syslog processing, and an ‘output-elasticsearch.conf’ file to define the Elasticsearch output.
Go to the logstash configuration directory and create the new configuration file ‘filebeat-input.conf’ in the ‘conf.d’ directory.
cd /etc/logstash/
vim conf.d/filebeat-input.conf
Paste the following configuration there.
input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}
Save and exit.
For processing the syslog data, we are using the filter plugin named ‘grok’ to parse the syslog messages.
Create a new configuration ‘syslog-filter.conf’.
vim conf.d/syslog-filter.conf
Paste the following configuration there.
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Save and exit.
And for the elasticsearch output, we will create the configuration file named ‘output-elasticsearch.conf’.
vim conf.d/output-elasticsearch.conf
Paste the following configuration there.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Save and exit.
When this is done, start the logstash service and enable it to launch every time at system boot.
sudo systemctl enable logstash
sudo systemctl start logstash
Check the logstash service using netstat and systemctl commands below.
netstat -plntu
systemctl status logstash
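You can also validate the pipeline configuration files with logstash's built-in configuration test. The paths below assume the default locations of the DEB package.
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit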
The logstash service is now up and running, listening on port ‘5443’ on the public IP address.
The Elastic Stack installation has been completed.
Step 2 – Install and Configure Filebeat on Ubuntu 18.04
In this step, we will configure the Ubuntu 18.04 client ‘elk-client01’ by installing the Elastic Beats data shippers ‘Filebeat’ on it.
Before installing filebeat on the system, we need to edit the ‘/etc/hosts’ file and copy the logstash certificate file ‘logstash-forwarder.crt’ to the ‘elk-client01’ server.
Edit the ‘/etc/hosts’ file using vim editor.
vim /etc/hosts
Paste the following configuration there.
10.0.15.10 elk-master elk-master
Save and exit.
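Make sure the ‘elk-master’ name now resolves from the client:
ping -c 2 elk-master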
Copy the logstash certificate file ‘logstash-forwarder.crt’ from the server using the scp command.
scp root@elk-master:/etc/logstash/ssl/logstash-forwarder.crt .
Next, install the Elastic Beats ‘Filebeat’ shipper by adding the elastic key and the elastic repository to the system.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
Update the repository and install the ‘filebeat’ package using the apt command below.
sudo apt update
sudo apt install filebeat -y
After the installation is complete, go to the ‘/etc/filebeat’ directory and edit the configuration file ‘filebeat.yml’.
cd /etc/filebeat/
vim filebeat.yml
Now enable the filebeat prospectors by changing the ‘enabled’ line value to ‘true’.
enabled: true
Define system log files to be sent to the logstash server. For this guide, we will add the ssh log file ‘auth.log’ and the syslog file.
  paths:
    - /var/log/auth.log
    - /var/log/syslog
Set up the output to logstash by commenting out the default ‘elasticsearch’ output and uncommenting the logstash output lines as below.
output.logstash:
  # The Logstash hosts
  hosts: ["elk-master:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
Save and exit.
Next, we need to edit the ‘filebeat.reference.yml’ file to enable filebeat modules, and we will enable the ‘syslog’ module.
vim filebeat.reference.yml
Enable the syslog system module for filebeat as below.
- module: system
  # Syslog
  syslog:
    enabled: true
Save and exit.
Copy the logstash certificate file ‘logstash-forwarder.crt’ to the ‘/etc/filebeat’ directory.
cp ~/logstash-forwarder.crt /etc/filebeat/logstash-forwarder.crt
Filebeat installation and configuration have been completed. Now start the filebeat service and enable it to launch every time at system boot.
systemctl start filebeat
systemctl enable filebeat
Check the filebeat service using commands below.
systemctl status filebeat
tail -f /var/log/filebeat/filebeat
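Filebeat also provides built-in test subcommands that you can use to verify the configuration and the SSL connection to the Logstash output:
filebeat test config
filebeat test output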
The filebeat shipper is now up and running on the Ubuntu 18.04 server.
Step 3 – Install and Configure Filebeat on CentOS 7.5
In this step, we will configure the CentOS 7.5 client ‘elk-client02’ by installing the Elastic Beats data shippers ‘Filebeat’ on it.
Before installing Filebeat on the system, we need to edit the ‘/etc/hosts’ file and copy the logstash certificate file ‘logstash-forwarder.crt’ to the ‘elk-client02’ server.
Edit the ‘/etc/hosts’ file using vim.
vim /etc/hosts
Paste the configuration below.
10.0.15.10 elk-master elk-master
Save and exit.
Copy the logstash certificate file ‘logstash-forwarder.crt’ from the server using the scp command.
scp root@elk-master:/etc/logstash/ssl/logstash-forwarder.crt .
Next, install the Elastic Beats ‘Filebeat’ shipper by adding the elastic key and the elastic repository to the system.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat <<EOF > /etc/yum.repos.d/elastic.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
Install filebeat using the yum command below.
yum install filebeat -y
After the installation is complete, go to the ‘/etc/filebeat’ directory and edit the configuration file ‘filebeat.yml’.
cd /etc/filebeat/
vim filebeat.yml
Now enable the filebeat prospectors by changing the ‘enabled’ line value to ‘true’.
enabled: true
Define system log files to be sent to the logstash server. For this guide, we will add the ssh log file ‘secure’ and the syslog file ‘messages’.
  paths:
    - /var/log/secure
    - /var/log/messages
Set up the output to logstash by commenting out the default ‘elasticsearch’ output and uncommenting the logstash output lines as below.
output.logstash:
  # The Logstash hosts
  hosts: ["elk-master:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
Save and exit.
Next, we need to edit the ‘filebeat.reference.yml’ file to enable filebeat modules, and we will enable the ‘syslog’ module.
vim filebeat.reference.yml
Enable the syslog system module for filebeat as below.
- module: system
  # Syslog
  syslog:
    enabled: true
Save and exit.
Copy the logstash certificate file ‘logstash-forwarder.crt’ to the ‘/etc/filebeat’ directory.
cp ~/logstash-forwarder.crt /etc/filebeat/logstash-forwarder.crt
Filebeat installation and configuration have been completed. Now start the filebeat service and enable it to launch at system boot.
systemctl start filebeat
systemctl enable filebeat
Check the filebeat service using commands below.
systemctl status filebeat
tail -f /var/log/filebeat/filebeat
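As on the Ubuntu client, you can verify the configuration and the SSL connection to the Logstash output with filebeat's test subcommands:
filebeat test config
filebeat test output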
The filebeat shipper is now up and running on the CentOS 7.5 server.
Step 4 – Testing
Open your web browser and type the elastic stack domain name, mine is: ‘elastic-stack.io’.
You will be prompted for the username and password of the basic authentication for the Kibana Dashboard.
Log in with the username ‘elastic’ and the password you created.
You will now see the Kibana dashboard. Click the ‘Set up index patterns’ button on the right.
Define the ‘filebeat-*’ index pattern and click the ‘Next step’ button.
For the ‘time filter field name’, choose the ‘@timestamp’ and click ‘Create index pattern’.
And the filebeat index pattern has been created.
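As an additional check, you can list the filebeat indices directly from Elasticsearch on the ‘elk-master’ server; when data is arriving, the ‘_cat/indices’ API will show one ‘filebeat-*’ index per day.
curl -XGET 'localhost:9200/_cat/indices/filebeat-*?v'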
Next, we will try to get the log information for failed SSH logins on each client server: the ‘elk-client01’ Ubuntu system and the ‘elk-client02’ CentOS system.
Inside the Kibana Dashboard, click the ‘Discover’ menu to get all server logs.
Filter with ‘beat.hostname’ set to the ‘elk-client01’ server and ‘source’ set to the ‘/var/log/auth.log’ file, and you will get results like those shown below.
The following is a sample log entry for an SSH failed password from the ‘auth.log’ file.
For the ‘elk-client02’ CentOS server, filter with ‘beat.hostname’ set to the ‘elk-client02’ server and ‘source’ set to the ‘/var/log/secure’ file, and you will get results like those shown below.
The following is a sample log entry for an SSH failed password from the ‘secure’ file.
The Elastic Stack and the Elastic Beat ‘Filebeat’ installation and configuration have been completed successfully.