Engineering blog from Studio Drydock.

Filebeat and AWS Elasticsearch

First published 12 May 2019

Elasticsearch, Logstash and Kibana (or ELK) are standard tools for aggregating and monitoring server logs. This post details the steps I took to integrate Filebeat (Elastic's lightweight log shipper) with an AWS-managed Elasticsearch instance operating within the AWS free tier.


My goal was to have Elasticsearch aggregating my server logs with a Kibana front-end for monitoring and searching. I had already set up and played with AWS CloudWatch, but found it cumbersome and slow – adding metrics to the dashboard or exploring anomalies was painful at best.

There are three options for running Elasticsearch and Kibana:

- Self-hosting on your own hardware or an EC2 instance
- Elastic's own hosted offering, Elastic Cloud
- The AWS-managed Elasticsearch Service

The last option was the only reasonable one for me, as I did not want to spend any money, and the AWS service fits within their free tier. (The Elastic Cloud offering provides only a 14-day trial, after which it's a minimum of $16/month, far too much for a toy project.)

Note that Logstash is not part of my solution; it is not included in AWS Elasticsearch and would need to be deployed separately. Given that I only need trivial parsing of log fields before ingestion, the Filebeat modules are sufficient.

Summary of issues

These are the issues I ran into while setting this up, due to obscure or missing documentation, or incompatibilities between AWS Elasticsearch and Filebeat:

- The default Elastic-licensed Filebeat cannot authenticate with AWS Elasticsearch; the OSS-licensed build is required
- The Elasticsearch and Kibana URLs must be given an explicit port of 443
- Filebeat's setup step tries to install machine-learning jobs, which fails against AWS Elasticsearch
- The Filebeat Apache module depends on Elasticsearch plugins that cannot be installed on AWS Elasticsearch

I found guidance on all but the last of these eventually by searching support forums, and the last I just hacked away at until it worked.


Here's my full process for setting this up. I started with an existing EC2 instance in a VPC running Ubuntu and Apache.

AWS Elasticsearch

Before setting up an Elasticsearch instance (or “domain”), create an EC2 security group that the Elasticsearch instance can use to allow ingress from other instances in the VPC. Add an ingress rule on port 443 from each EC2 instance that will be sending log data.
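The same ingress rule can also be added from the command line. This is a sketch only; the two security-group IDs are placeholders for your own:

```shell
# Allow HTTPS (port 443) into the Elasticsearch security group from the
# security group attached to the log-shipping EC2 instance.
# sg-elastic and sg-webserver are placeholder IDs; substitute your own.
aws ec2 authorize-security-group-ingress \
    --group-id sg-elastic \
    --protocol tcp \
    --port 443 \
    --source-group sg-webserver
```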

Now just create the Elasticsearch domain, choosing the default values. I selected a t2.small.elasticsearch instance in order to fit within the free tier. Ensure that the instance is VPC only, not public, and select the previously-created security group. For the access policy, select the “Do not require signing request with IAM credential” template (as far as I can tell Filebeat does not support signing requests, which is why the instance must be protected by the VPC).

Filebeat

Follow the directions to install Filebeat, ensuring that you use the OSS-licensed version. Initially I had installed the default Elastic-licensed version, but this cannot authenticate with AWS Elasticsearch.

Edit /etc/filebeat/filebeat.yml to set up both the Elasticsearch and Kibana URLs (these are shown on the AWS Elasticsearch dashboard). In both cases you will need to modify the URL to give it an explicit port of 443.
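The relevant settings look something like the following; the endpoint hostname here is a placeholder for the VPC endpoint shown on your own AWS dashboard:

```yaml
# /etc/filebeat/filebeat.yml (excerpt)
# Replace the hostname with your own VPC endpoint, and note the
# explicit :443 port on both URLs.
setup.kibana:
  host: "https://vpc-mydomain-abc123.us-east-1.es.amazonaws.com:443"

output.elasticsearch:
  hosts: ["https://vpc-mydomain-abc123.us-east-1.es.amazonaws.com:443"]
```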

Now let Filebeat set up its indexes and dashboards with

sudo filebeat setup --pipelines --template --dashboards

By explicitly providing the --pipelines --template --dashboards arguments we omit the --machine-learning setup step, which is otherwise implied by default and causes an error when used with AWS Elasticsearch.

Start Filebeat and then watch the systemd log for errors:

sudo filebeat test output
sudo service filebeat start
journalctl -f

(Press Ctrl+C to stop watching the log).

Apache logs

The Filebeat Apache module provides the necessary logic for scraping error and access logs from the web server; however, it depends on some plugins being installed in Elasticsearch – something that isn't possible with AWS Elasticsearch.

Thankfully the Filebeat modules are just a collection of YAML and JSON files that are easily modified. I copied the existing /usr/share/filebeat/module/apache directory to /usr/share/filebeat/module/apache-aws, and then edited the files inside to remove any use of the geoip or useragent modules.
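The copy itself is a one-liner; the processor removal is a manual edit. The paths below match the defaults from the Ubuntu package:

```shell
# Duplicate the stock Apache module under a new name.
sudo cp -r /usr/share/filebeat/module/apache /usr/share/filebeat/module/apache-aws

# Then hand-edit the ingest pipeline JSON files under the copied module's
# subdirectories, deleting the geoip and user_agent processor blocks.
```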

To enable the new module, copy /etc/filebeat/modules.d/apache.yml.disabled to /etc/filebeat/modules.d/apache-aws.yml.disabled and edit its contents to point to the new module. Then enable it, restart Filebeat, and check for errors:

sudo service filebeat stop
sudo filebeat modules enable apache-aws
sudo service filebeat start
journalctl -f
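Once Filebeat is shipping events, you can confirm that indices are being created by querying Elasticsearch from within the VPC. The endpoint hostname here is a placeholder for your own:

```shell
# Run from an EC2 instance inside the VPC; substitute your own endpoint.
# Lists all indices -- a filebeat-* index should appear once events flow.
curl "https://vpc-mydomain-abc123.us-east-1.es.amazonaws.com:443/_cat/indices?v"
```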

Kibana

We've now got Apache logs being read by Filebeat and ingested into Elasticsearch; time to look at them in Kibana. Because the AWS Elasticsearch instance is running in a VPC, your web browser has no access to it.

There are three possible solutions:

- Run a proxy server inside the VPC that forwards requests to Kibana
- Set up a VPN into the VPC
- Open an SSH tunnel through an EC2 instance in the VPC

I did the last of these, as it's quite simple and only a minor inconvenience. Create an entry in your ~/.ssh/config (on your desktop) along the lines of

Host kibana
    HostName <your EC2 instance's public address>
    User ubuntu
    LocalForward 9200 <your Elasticsearch VPC endpoint>:443

Then open the tunnel with (on your desktop):

ssh kibana -N

And connect to https://localhost:9200/_plugin/kibana in your browser.