Track privilege escalations in your Kubernetes cluster with eBPF

Introduction and risks

Services exposed to the public internet are tempting targets for attackers trying to steal sensitive information or cause service disruption. Even when you successfully prevent attacks, you often need to prove to auditors or regulators that the attacks were prevented.

Sometimes the activity is unintentional, such as forgotten credentials or an accidental misconfiguration, but sometimes, especially when seen in quantity, it can point to a malicious attacker. If you work in a regulated industry, maintaining records of such incidents is essential, but even if you don't, it's important to retain records of potential attacks for future reference.

This post shows you how to use Falco to detect potential risks and then log them to different Parseable log streams depending on their severity using Fluent Bit.

Where risks can occur

Risks can occur at many levels and layers of the modern cloud-native stack, from obvious to less obvious, including:

  • Users attempting to gain sudo access to an instance.

  • Sensitive files opened.

  • Too many incorrect logins.

  • Unexpected inbound and outbound connections.

  • Scheduling cron jobs.

  • Launching package managers.

Possible options for logging security threats

Security threat hunting is by nature extensive. Ideally, you want to capture everything you can, with the least amount of instrumentation. This is where eBPF shines: eBPF-based systems can capture OS-level metrics and events with zero instrumentation, and eBPF is now one of the most widely used options in the cloud-native space.

In this post we’ll focus on Falco, a security tool that provides runtime security across hosts, containers, Kubernetes, and cloud environments.

Installing Falco

This post uses Falco deployed on Kubernetes with Helm, following the instructions in the project documentation. If you're already using Falco and have it set up, you can follow the rest of the post and adapt the relevant parts.
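If you're starting from scratch, installing Falco with Helm looks roughly like this, per the Falco Helm chart documentation (the falco namespace and release name are assumptions chosen to match the upgrade command used later in this post):

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace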

If this is your first time using Falco, follow the "Trigger a rule" section to gain some sample data and rules to experiment with for the rest of the post.

Adding Falco Sidekick

Falco is fantastic for identifying and logging potential security issues, but by default, it keeps the log of those issues to itself. Sidekick is an additional daemon from the Falco team that connects to Falco and forwards those events to dozens of different output channels, from communication tools to logging and observability platforms.

To add Sidekick, run a Helm upgrade command and set two new configuration flags: one to enable Sidekick and the other to enable the Web UI, which is useful for checking that everything is working as expected, at least to begin with.

helm upgrade --namespace falco falco falcosecurity/falco --set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true

If you followed the "Trigger a rule" set from the Falco documentation, a few events will likely already be visible in the UI.
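To open the Web UI locally, you can port-forward its service. The service name and port here are assumptions based on the chart defaults for a release named falco in the falco namespace; run kubectl get svc -n falco if yours differ:

kubectl port-forward svc/falco-falcosidekick-ui -n falco 2802:2802

Then browse to http://localhost:2802 to see the incoming events.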

Logging to Parseable

The aim of this post is to send Falco's security events to Parseable, so next, start Parseable on Kubernetes by following its documentation.
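To check the Parseable dashboard as you go, you can port-forward its service. The service name, namespace, and port here are assumptions that match the internal DNS address used in the Fluent Bit configuration later in this post:

kubectl port-forward svc/parseable -n parseable 8000:80

Then open http://localhost:8000 and log in with the credentials you configured for Parseable.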

There's one more service needed to route the Falco events to a Parseable log stream. There are a handful of ways you could do this; I opted for Fluent Bit because both Parseable and Falco support it, and it has a flexible but lightweight engine for defining the inputs, outputs, and filters that process log and metrics data. This means you can route and filter data to different Parseable log streams.

First, add and install Fluent Bit, and then I'll cover the configuration changes to achieve the routing to Parseable:

helm repo add fluent https://fluent.github.io/helm-charts
helm install fluent-bit fluent/fluent-bit -n fluentbit --create-namespace

Since you installed Fluent Bit with Helm, you need to override the default values.yaml file, so grab a copy from the Helm charts repository. You only need to change the config key, from around line 375 onwards.

Fluent Bit has a LOT of configuration possibilities, centred around four key concepts (see the skeleton after this list for how they fit together):

  • Service: Global properties for the Fluent Bit service.

  • Inputs: One or more definitions of input data.

  • Filters: One or more methods for filtering the input data.

  • Outputs: One or more destinations for data, filtered or not.
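As a rough sketch, those four sections nest under the chart's config key in values.yaml like this (the service section is simplified here, so keep the chart's defaults for it; the placeholder comments are filled in by the steps below):

config:
  service: |
    [SERVICE]
        Daemon Off
        Log_Level info
  inputs: |
    # replaced in the steps below
  filters: |
    # replaced in the steps below
  outputs: |
    # replaced in the steps below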

The output is easiest to start with. Remove the contents of the existing config > outputs section and add the following http output plugin details for Parseable:

outputs: |
    [OUTPUT]
        Name http
        Match kube.*
        host parseable.parseable.svc.cluster.local
        http_User admin
        http_Passwd admin
        format json
        port 80
        header Content-Type application/json
        header X-P-META-meta1 value1
        header X-P-TAG-tag1 value1
        header X-P-Stream falco
        uri /api/v1/ingest
        json_date_key timestamp
        json_date_format iso8601

This uses Parseable's internal Kubernetes DNS address, port, authentication details, ingestion endpoint, and the header X-P-Stream falco to send the logs to the falco stream.

Next, replace the contents of the existing config > inputs section with the following tail input plugin details:

inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

This tails the log files for all containers running under Kubernetes.

Finally, replace the contents of the existing config > filters section with the following:

filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Merge_Log_key log_processed
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [FILTER]
        Name grep
        Match kube.*
        Regex $log_processed['priority'] Warning

Filters don't just refine what data you send between inputs and outputs; they can also enrich the data. This is what the Kubernetes filter plugin does: it adds Kubernetes metadata to the log entries from each container, giving you more data to work with during analysis. The second filter uses the grep filter plugin to filter logs. As it uses regex, there are 1001 ways you could filter, but I opted for a simple option: keep only Warning-level log messages. Note that because the Kubernetes filter merges the parsed Falco event under the log_processed key, the grep filter uses the record accessor $log_processed['priority'] to reach the priority field.

Send the updated values to the cluster, and start the port forwarding as the command output instructs you:

helm upgrade --install fluent-bit fluent/fluent-bit --values values.yaml -n fluentbit

Now trigger a few Falco rules again:

kubectl exec -it $(kubectl get pods --selector=app=nginx -o name) -- cat /etc/shadow

Reload the Parseable dashboard, and you should see the Falco events.
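If you prefer the command line, you can also confirm that the stream exists via Parseable's API. This sketch assumes the port-forward from earlier and the admin/admin credentials used in the Fluent Bit output:

curl -u admin:admin http://localhost:8000/api/v1/logstream

You should see the falco stream listed in the response.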

Routing to different log streams

So far, so good: the most critical events are finding their way into Parseable. But Parseable is also great at long-term storage and analysis, so we can send and store more data while separating it, so that different responsible parties can access the data that matters to them.

It's not immediately obvious how Fluent Bit manages the routing of inputs to outputs. It happens with matching Tag / Match pairs, and there's a lot of trial and error in figuring out the fields and values you're trying to match. Here, the stdout output plugin can help you debug:

outputs: |
    [OUTPUT]
        Name stdout
        Format json
        Match kube.*

For the example so far, the OUTPUT Match kube.* matches the INPUT Tag kube.*. To route to two different outputs, you use the rewrite_tag filter plugin. Replace the grep filter with the following:

    [FILTER]
        Name rewrite_tag
        Match kube.*
        Rule $log_processed['priority'] ^(.*)$ ps.$0 false
        Emitter_Name  re_emitted

This matches the kube.* tag from the input and then rewrites the tag based on the priority field buried inside the log_processed field. It took me a lot of frustration and usage of the aforementioned stdout plugin to figure that out! This is because the Kubernetes filter, in this case, uses Merge_Log to handle JSON strings better. When that is set to On, it places the parsed log under the field named by Merge_Log_Key (here, log_processed). If you disabled this feature, you would find the values in log instead.

Fluent Bit record accessors always mark the top-level field with a $. You use regex to match the field's values and then capture groups of those values to populate variables. In the example above, the rule matches any text value (which I know to be "Notice" or "Warning") and creates a new tag with that value prefixed by ps., so ps.Warning or ps.Notice.
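To make the accessor easier to picture, here's a simplified sketch of what a merged record might look like after the Kubernetes filter has run. The field names follow the configuration above, but the values are purely illustrative:

{
  "log_processed": {
    "priority": "Warning",
    "rule": "Read sensitive file untrusted",
    "output": "Sensitive file opened for reading by non-trusted program (file=/etc/shadow)"
  },
  "kubernetes": {
    "namespace_name": "default",
    "pod_name": "nginx-6d777db9c-abcde"
  }
}

The rewrite_tag rule reads $log_processed['priority'], so this record would be re-emitted with the tag ps.Warning.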

Replace the existing outputs with the following:

outputs: |
    [OUTPUT]
        Name http
        Match ps.Warning
        host parseable.parseable.svc.cluster.local
        http_User admin
        http_Passwd admin
        format json
        port 80
        header Content-Type application/json
        header X-P-META-meta1 value1
        header X-P-TAG-tag1 value1
        header X-P-Stream falcowarn
        uri /api/v1/ingest
        json_date_key timestamp
        json_date_format iso8601

    [OUTPUT]
        Name http
        Match ps.Notice
        host parseable.parseable.svc.cluster.local
        http_User admin
        http_Passwd admin
        format json
        port 80
        header Content-Type application/json
        header X-P-META-meta1 value1
        header X-P-TAG-tag1 value1
        header X-P-Stream falconotice
        uri /api/v1/ingest
        json_date_key timestamp
        json_date_format iso8601

These are much the same as the previous Parseable output, but now there are two outputs, each matching one of the rewritten ps.* tags and sending to its own log stream: falcowarn and falconotice.

Update the Helm chart again:

helm upgrade --install fluent-bit fluent/fluent-bit --values values.yaml -n fluentbit

And trigger a few Falco rules:

kubectl exec -it $(kubectl get pods --selector=app=nginx -o name) -- cat /etc/shadow

Reload the Parseable dashboard, and you should see the Falco events in the two log streams.

Generate alerts based on log stream

With events separated into log streams, you can now use Parseable's alerts feature to generate messages to appropriate channels and people.

Inside the "falcowarn" log stream, click the Manage icon, and in the Alerts panel, click + New Alert, then fill in the details for your alert service of choice. For example, I use a webhook service to alert me for warnings, and I don't set any alerts for notices, as I'm happy to look through those when needed.

Summary

This post looked at how to use Falco to detect potential security issues and violations, and how to pass those issues to different Parseable log streams for analysis, auditing, and generating alerts.

Currently, the filtering and routing of the issues is fairly basic. You could change the access details, servers, or metadata of the Parseable log streams for more fine-grained control, or use different regex patterns in further filters to route logs to more log streams. With the flexibility of Fluent Bit and Parseable, the possibilities are endless.