ID | Technique | Tactic |
---|---|---|
T1204 | User Execution | Execution |
Detection: Kubernetes Pod Created in Default Namespace
Description
The following analytic detects the creation of Kubernetes pods in the default, kube-system, or kube-public namespaces. It leverages Kubernetes audit logs to identify pod creation events within these specific namespaces. This activity is significant for a SOC as it may indicate an attacker attempting to hide their presence or evade defenses. Unauthorized pod creation in these namespaces can suggest a successful cluster breach, potentially leading to privilege escalation, persistent access, or further malicious activities within the cluster.
Search
`kube_audit` objectRef.resource=pods verb=create objectRef.namespace IN ("default", "kube-system", "kube-public")
| fillnull
| stats count by objectRef.name objectRef.namespace objectRef.resource requestReceivedTimestamp requestURI responseStatus.code sourceIPs{} stage user.groups{} user.uid user.username userAgent verb
| rename sourceIPs{} as src_ip, user.username as user
| `kubernetes_pod_created_in_default_namespace_filter`
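For quick triage of a specific result, a simpler variant of the same search (built only from the macro and fields already referenced above) can list the individual audit events before aggregation. This is a sketch for investigation, not part of the shipped detection:
`kube_audit` objectRef.resource=pods verb=create objectRef.namespace IN ("default", "kube-system", "kube-public")
| table _time objectRef.namespace objectRef.name user.username sourceIPs{} responseStatus.code userAgent verb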
Data Source
Name | Platform | Sourcetype | Source | Supported App |
---|---|---|---|---|
Kubernetes Audit | Kubernetes | '_json' | 'kubernetes' | N/A |
Macros Used
Name | Value |
---|---|
kube_audit | source="kubernetes" |
kubernetes_pod_created_in_default_namespace_filter | search * |
kubernetes_pod_created_in_default_namespace_filter is an empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL.
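For example, if a known automation account legitimately creates pods in the default namespace, the macro value could be redefined from search * to something along these lines (the service account name below is hypothetical):
search NOT user="system:serviceaccount:default:example-ci-deployer"
Because the filter macro is applied after the rename command, the filter should reference the renamed user and src_ip fields rather than user.username or sourceIPs{}.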
Annotations
Default Configuration
This detection is configured by default in Splunk Enterprise Security to run with the following settings:
Setting | Value |
---|---|
Disabled | true |
Cron Schedule | 0 * * * * |
Earliest Time | -70m@m |
Latest Time | -10m@m |
Schedule Window | auto |
Creates Risk Event | True |
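With these defaults, each hourly run searches a contiguous 60-minute window (from -70m@m to -10m@m); the 10-minute offset leaves room for ingestion lag in the audit log pipeline.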
Implementation
The detection is based on data that originates from Kubernetes audit logs, so ensure that audit logging is enabled in your Kubernetes cluster. Kubernetes audit logs provide a record of the requests made to the Kubernetes API server, which is crucial for monitoring and detecting suspicious activity. Configure an audit policy to determine which activities are logged and provide it to the API server. Use the Splunk OpenTelemetry Collector for Kubernetes to collect the logs; this document describes how to collect the audit log file: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/docs/migration-from-sck.md. To use this detection with AWS EKS, enable EKS control plane logging (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) and then collect the logs from CloudWatch using the AWS TA (https://splunk.github.io/splunk-add-on-for-amazon-web-services/CloudWatchLogs/).
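Once audit logs are being ingested, a quick sanity-check search (assuming the same kube_audit macro, i.e. source="kubernetes") can confirm that pod-related audit events and the fields used by the detection are present:
`kube_audit` objectRef.resource=pods
| stats count by verb objectRef.namespace user.username
If this returns no results for the default, kube-system, or kube-public namespaces, verify the collector configuration and the audit policy before enabling the detection.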
Known False Positives
unknown
Associated Analytic Story
Risk Based Analytics (RBA)
Risk Message | Risk Score | Impact | Confidence |
---|---|---|---|
Kubernetes Pod Created in Default Namespace by $user$ | 49 | 70 | 70 |
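The risk score shown here is consistent with the usual Enterprise Security convention of impact × confidence / 100 (70 × 70 / 100 = 49).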
References
Detection Testing
Test Type | Status | Dataset | Source | Sourcetype |
---|---|---|---|---|
Validation | ✅ Passing | N/A | N/A | N/A |
Unit | ✅ Passing | Dataset | kubernetes | _json |
Integration | ✅ Passing | Dataset | kubernetes | _json |
Replay any dataset to Splunk Enterprise by using our replay.py tool or the UI. Alternatively, you can replay a dataset into a Splunk Attack Range.
Source: GitHub | Version: 2