ID | Technique | Tactic |
---|---|---|
T1204 | User Execution | Execution |
Detection: Kubernetes Create or Update Privileged Pod
Description
The following analytic detects the creation or update of privileged pods in Kubernetes. It identifies this activity by monitoring Kubernetes Audit logs for pod configurations that include root privileges. This behavior is significant for a SOC as it could indicate an attempt to escalate privileges, exploit the kernel, and gain full access to the host's namespace and devices. If confirmed malicious, this activity could lead to unauthorized access to sensitive information, data breaches, and service disruptions, posing a severe threat to the environment.
Search
`kube_audit` objectRef.resource=pods verb=create OR verb=update requestObject.metadata.annotations.kubectl.kubernetes.io/last-applied-configuration=*\"privileged\":true*
| fillnull
| stats count values(user.groups{}) as user_groups by kind objectRef.name objectRef.namespace objectRef.resource requestObject.kind responseStatus.code sourceIPs{} stage user.username userAgent verb requestObject.metadata.annotations.kubectl.kubernetes.io/last-applied-configuration
| rename sourceIPs{} as src_ip, user.username as user
| `kubernetes_create_or_update_privileged_pod_filter`
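The search keys on the kubectl.kubernetes.io/last-applied-configuration annotation, which kubectl writes when a manifest is applied with kubectl apply and which contains the JSON of the submitted object. As a minimal sketch (pod name, namespace, and image are illustrative, not part of the detection content), a manifest like the following would produce a matching "privileged":true annotation:

```yaml
# privileged-pod.yaml -- illustrative example only.
# Applying this with `kubectl apply -f privileged-pod.yaml` makes kubectl store the spec JSON,
# including "privileged":true, in the kubectl.kubernetes.io/last-applied-configuration
# annotation that the search above matches.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test        # hypothetical name
  namespace: default
spec:
  containers:
    - name: shell
      image: busybox:1.36      # hypothetical image
      command: ["sleep", "3600"]
      securityContext:
        privileged: true       # grants the container access to host devices and weakens isolation
```

Note that pods created without kubectl apply (for example via kubectl run, server-side apply, or a controller) do not carry this client-side annotation and would not match this particular search.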
Data Source
Name | Platform | Sourcetype | Source | Supported App |
---|---|---|---|---|
Kubernetes Audit | Kubernetes | _json | kubernetes | N/A |
Macros Used
Name | Value |
---|---|
kube_audit | source="kubernetes" |
kubernetes_create_or_update_privileged_pod_filter | search * |
kubernetes_create_or_update_privileged_pod_filter is an empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL.
Annotations
Default Configuration
This detection is configured by default in Splunk Enterprise Security to run with the following settings:
Setting | Value |
---|---|
Disabled | true |
Cron Schedule | 0 * * * * |
Earliest Time | -70m@m |
Latest Time | -10m@m |
Schedule Window | auto |
Creates Risk Event | True |
Implementation
The detection is based on data from Kubernetes Audit logs, so ensure that audit logging is enabled in your cluster. Kubernetes audit logs record the requests made to the Kubernetes API server, which is crucial for monitoring and detecting suspicious activity. Configure an audit policy to determine which activities are logged, and provide it to the API server. Use the Splunk OpenTelemetry Collector for Kubernetes to collect the logs; this guide describes how to collect the audit log file: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/docs/migration-from-sck.md. To use this detection with AWS EKS, enable EKS control plane logging (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) and then collect the logs from CloudWatch using the AWS TA: https://splunk.github.io/splunk-add-on-for-amazon-web-services/CloudWatchLogs/.
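Because the search reads fields under requestObject, the audit policy must log pod writes at a level that includes the request body (Request or RequestResponse). A minimal sketch of such a policy, assuming a self-managed control plane where you can pass --audit-policy-file and --audit-log-path to the API server (managed platforms such as EKS configure audit logging differently, as noted above):

```yaml
# audit-policy.yaml -- illustrative sketch; adjust rules and paths for your cluster.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record full request and response bodies for pod create/update/patch calls so that
  # requestObject.metadata.annotations.* is available to the detection.
  - level: RequestResponse
    verbs: ["create", "update", "patch"]
    resources:
      - group: ""               # core API group
        resources: ["pods"]
  # Log everything else at Metadata level to keep audit volume manageable.
  - level: Metadata
```

Whatever collection path you use, the events must arrive in Splunk with source="kubernetes" and sourcetype _json (per the data source and kube_audit macro definitions above) for this detection to find them.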
Known False Positives
Unknown
Associated Analytic Story
Risk Based Analytics (RBA)
Risk Message | Risk Score | Impact | Confidence |
---|---|---|---|
Kubernetes privileged pod created by user $user$. | 49 | 70 | 70 |
References
Detection Testing
Test Type | Status | Dataset | Source | Sourcetype |
---|---|---|---|---|
Validation | ✅ Passing | N/A | N/A | N/A |
Unit | ✅ Passing | Dataset | kubernetes | _json |
Integration | ✅ Passing | Dataset | kubernetes | _json |
Replay any dataset to Splunk Enterprise by using our replay.py tool or the UI. Alternatively, you can replay a dataset into a Splunk Attack Range.
Source: GitHub | Version: 2