
Missing container and k8s details except container.id in case of least privileged falco #3256

Open
aberezovski opened this issue Jun 20, 2024 · 2 comments


aberezovski commented Jun 20, 2024

Hello Falco team,

While evaluating Falco on different managed Kubernetes clusters, my team and I observed some unexpected behaviour.

Describe the bug

On an AKS cluster, generated alerts are inconsistently populated with container and Kubernetes metadata. Only container.id is present, while the container's name and image and the pod's name and namespace are missing.

The behaviour is not tied to the Kubernetes namespace in which the target pod runs, but the details mentioned above are most often missing for containers deployed in the kube-system namespace.

How to reproduce it

  1. Deploy Falco in least-privileged mode in an AKS cluster
  2. Open a shell into a container of any kube-proxy-* pod in the kube-system namespace
  3. Check the generated Falco alert and notice that attributes such as container.name, container.image.repository, k8s.pod.name and k8s.ns.name are set to null:
{
    "hostname": "aks-lhxynh188x-14626312-vmss000014",
    "output": "09:58:35.578767314: Notice A shell was spawned in a container with an attached terminal ...",
    ...
    "output_fields": {
        "container.id": "446c7dbbeac8",
        "container.image.repository": null,
        "container.image.tag": null,
        "container.name": null,
        "evt.arg.flags": "EXE_WRITABLE",
        "evt.time": 1718877515578767314,
        "evt.type": "execve",
        "k8s.ns.name": null,
        "k8s.pod.name": null,
        "proc.cmdline": "sh",
        "proc.exepath": "/bin/dash",
        "proc.name": "sh",
        "proc.pname": "runc",
        "proc.tty": 34816,
        "user.loginuid": -1,
        "user.name": "root",
        "user.uid": 0
    }
}
  4. Open a shell into the Falco container and install crictl to check whether the container runtime socket can provide the container and Kubernetes information.
  • Install curl by running apk add curl
  • Install crictl by following the instructions at install-crictl
  • Run crictl -r /host/run/containerd/containerd.sock inspect {container.id}, replacing the {container.id} placeholder with a valid container identifier.
  • Notice that all container and Kubernetes information is available in the command's response. See the output of the crictl inspect command in the Evidence section below.

Expected behaviour

All container and Kubernetes information should be reflected in the alerts generated by Falco rules, to guarantee accurate traceability of the affected pods and containers.

Evidence

Alert generated for an attempt to open a shell in a running container

{
    "hostname": "aks-lhxynh188x-14626312-vmss000014",
    "output": "09:58:35.578767314: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=sh proc_exepath=/bin/dash parent=runc command=sh terminal=34816 exe_flags=EXE_WRITABLE container_id=446c7dbbeac8 container_image=<NA> container_image_tag=<NA> container_name=<NA> k8s_ns=<NA> k8s_pod_name=<NA>)",
    "priority": "Notice",
    "rule": "Terminal shell in container",
    "source": "syscall",
    "tags": [
        "T1059",
        "container",
        "maturity_stable",
        "mitre_execution",
        "shell"
    ],
    "time": "2024-06-20T09:58:35.578767314Z",
    "output_fields": {
        "container.id": "446c7dbbeac8",
        "container.image.repository": null,
        "container.image.tag": null,
        "container.name": null,
        "evt.arg.flags": "EXE_WRITABLE",
        "evt.time": 1718877515578767314,
        "evt.type": "execve",
        "k8s.ns.name": null,
        "k8s.pod.name": null,
        "proc.cmdline": "sh",
        "proc.exepath": "/bin/dash",
        "proc.name": "sh",
        "proc.pname": "runc",
        "proc.tty": 34816,
        "user.loginuid": -1,
        "user.name": "root",
        "user.uid": 0
    }
}

Alert generated for reading sensitive file /etc/shadow in a running container

{
    "hostname": "aks-lhxynh188x-14626312-vmss000014",
    "output": "10:48:02.779453149: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=<NA> ggparent=<NA> gggparent=<NA> evt_type=openat user=root user_uid=0 user_loginuid=-1 process=cat proc_exepath=/bin/cat parent=sh command=cat /etc/shadow terminal=34816 container_id=446c7dbbeac8 container_image=<NA> container_image_tag=<NA> container_name=<NA> k8s_ns=<NA> k8s_pod_name=<NA>)",
    "priority": "Warning",
    "rule": "Read sensitive file untrusted",
    "source": "syscall",
    "tags": [
        "T1555",
        "container",
        "filesystem",
        "host",
        "maturity_stable",
        "mitre_credential_access"
    ],
    "time": "2024-06-20T10:48:02.779453149Z",
    "output_fields": {
        "container.id": "446c7dbbeac8",
        "container.image.repository": null,
        "container.image.tag": null,
        "container.name": null,
        "evt.time": 1718880482779453149,
        "evt.type": "openat",
        "fd.name": "/etc/shadow",
        "k8s.ns.name": null,
        "k8s.pod.name": null,
        "proc.aname[2]": null,
        "proc.aname[3]": null,
        "proc.aname[4]": null,
        "proc.cmdline": "cat /etc/shadow",
        "proc.exepath": "/bin/cat",
        "proc.name": "cat",
        "proc.pname": "sh",
        "proc.tty": 34816,
        "user.loginuid": -1,
        "user.name": "root",
        "user.uid": 0
    }
}

Executing crictl -r /host/run/containerd/containerd.sock inspect 446c7dbbeac8 inside the Falco container, to get information about the container with id 446c7dbbeac8 from the mounted container runtime socket:

  "status": {
    "id": "446c7dbbeac87f36bae78b52be827d4ac6a4012dafd4a52d94d0e47804324258",
    "metadata": {
        "attempt": 0,
        "name": "kube-proxy"
    },
    "state": "CONTAINER_RUNNING",
    "createdAt": "2024-06-18T20:05:46.673971532Z",
    "startedAt": "2024-06-18T20:05:46.70462519Z",
    "finishedAt": "0001-01-01T00:00:00Z",
    "exitCode": 0,
    "image": {
        "annotations": {},
        "image": "mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.27.7-hotfix.20240411"
    },
    "imageRef": "mcr.microsoft.com/oss/kubernetes/kube-proxy@sha256:4db3d5d84030052de64a19e8678820cbf936dbaab975141e530d264598d6ab2e",
    "reason": "",
    "message": "",
    "labels": {
        "io.kubernetes.container.name": "kube-proxy",
        "io.kubernetes.pod.name": "kube-proxy-dk642",
        "io.kubernetes.pod.namespace": "kube-system",
        "io.kubernetes.pod.uid": "66e1be2e-793c-4ad3-a4a5-9516eb86902d"
    },
    ...
    }
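
To illustrate that the runtime socket already exposes everything the alert is missing, here is a small Python sketch (the sample JSON is trimmed from the crictl inspect output above) that maps the CRI labels onto the Falco fields which come back as null:

```python
import json

# Sample trimmed from the crictl inspect response above; the full
# response contains many more fields under "status".
crictl_output = """
{
  "status": {
    "id": "446c7dbbeac87f36bae78b52be827d4ac6a4012dafd4a52d94d0e47804324258",
    "image": {
      "image": "mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.27.7-hotfix.20240411"
    },
    "labels": {
      "io.kubernetes.container.name": "kube-proxy",
      "io.kubernetes.pod.name": "kube-proxy-dk642",
      "io.kubernetes.pod.namespace": "kube-system"
    }
  }
}
"""

status = json.loads(crictl_output)["status"]
labels = status["labels"]

# Every field that is null in the Falco alert is recoverable from the
# runtime's response: the standard io.kubernetes.* labels carry the
# container name, pod name and namespace, and "image" carries the repo/tag.
metadata = {
    "container.id": status["id"][:12],  # Falco reports the short 12-char id
    "container.name": labels["io.kubernetes.container.name"],
    "k8s.pod.name": labels["io.kubernetes.pod.name"],
    "k8s.ns.name": labels["io.kubernetes.pod.namespace"],
    "container.image.repository": status["image"]["image"].rsplit(":", 1)[0],
    "container.image.tag": status["image"]["image"].rsplit(":", 1)[1],
}
print(metadata)
```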

Environment

Falco deployed on AKS cluster using Falco Helm Chart version 4.4.2.

  • Falco version:
    Falco version: 0.38.0 (x86_64)

  • System info:

{
  "machine": "x86_64",
  "nodename": "falco-8b7db",
  "release": "5.15.0-1064-azure",
  "sysname": "Linux",
  "version": "#73-Ubuntu SMP Tue Apr 30 14:24:24 UTC 2024"
}
  • Cloud provider or hardware configuration: Azure
  • OS:
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
  • Kernel:
Linux falco-8b7db 5.15.0-1064-azure #73-Ubuntu SMP Tue Apr 30 14:24:24 UTC 2024 x86_64 GNU/Linux
  • Installation method:
    Deployed to the k8s cluster as a DaemonSet using Helm Chart version 4.4.2 with the following custom values YAML file:
driver:
  enabled: true
  kind: modern_ebpf
  modernEbpf:
    leastPrivileged: true

tty: true

falco:
  json_output: true
  json_include_output_property: true

image:
  pullPolicy: Always
  repository: falcosecurity/falco-distroless

extra:
  args:
  - --disable-cri-async
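
As a possible isolation test (not a fix), one could redeploy with the least-privileged flag disabled and check whether the container and k8s metadata comes back in the alerts. The fragment below mirrors the values above with only that one flag flipped:

```yaml
driver:
  enabled: true
  kind: modern_ebpf
  modernEbpf:
    # Flipping only this flag relative to the values above, to verify
    # whether the missing metadata is tied to least-privileged mode.
    leastPrivileged: false
```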
@incertum (Contributor) commented
@LucaGuerra @Andreagit97 re implications of running Falco as least privileged, could you help?


poiana commented Sep 19, 2024

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.

Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle stale
