This search uses the built-in Splunk command `anomalydetection` to detect anomalies with respect to users making a high number of GetObject API calls to download objects from S3 within a 10-minute time window. The field `probable_cause` names the field that best explains why an event is anomalous. The command computes a probability for each GetObject event from the frequencies of the fields "count", "user_type", and "user_arn", and flags events whose combination of values is unusually rare.
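Splunk's `anomalydetection` implementation is internal to the product, but the frequency-based idea it describes can be illustrated with a minimal Python sketch (an assumption-laden approximation, not Splunk's algorithm): score each event by the product of the empirical frequencies of its field values, so rare combinations score lowest.

```python
from collections import Counter

def anomaly_scores(events, fields):
    """Score each event by the product of the empirical frequencies
    of its field values; rarer combinations get lower scores."""
    freqs = {f: Counter(e[f] for e in events) for f in fields}
    n = len(events)
    scores = []
    for e in events:
        p = 1.0
        for f in fields:
            p *= freqs[f][e[f]] / n
        scores.append(p)
    return scores

# Hypothetical 10-minute GetObject aggregates, mirroring the stats output.
events = [
    {"count": 5, "user_type": "IAMUser", "user_arn": "arn:a"},
    {"count": 5, "user_type": "IAMUser", "user_arn": "arn:a"},
    {"count": 500, "user_type": "AssumedRole", "user_arn": "arn:b"},
]
scores = anomaly_scores(events, ["count", "user_type", "user_arn"])
# The third event (a rare, high-count download burst) scores lowest.
```

In this toy example the anomalous event would be the one annotated with a `probable_cause` in the real command's output.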
- Type: Anomaly
- Product: Splunk Enterprise, Splunk Enterprise Security, Splunk Cloud
- Last Updated: 2023-04-10
- Author: Bhavin Patel, Splunk
- ID: e4384bbf-5835-4831-8d85-694de6ad2cc6
- Kill Chain Phase
- CIS 10
```
`cloudtrail` eventName=GetObject
| bin _time span=10m
| stats count values(requestParameters.bucketName) as bucketName by _time src_ip aws_account_id user_type user_arn userIdentity.principalId
| anomalydetection "count" "user_type" "user_arn" action=annotate
| search probable_cause=*
| `aws_exfiltration_via_anomalous_getobject_api_activity_filter`
```
The SPL above uses the following Macros:
- `cloudtrail`
- `aws_exfiltration_via_anomalous_getobject_api_activity_filter`

`aws_exfiltration_via_anomalous_getobject_api_activity_filter` is an empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL.
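Splunk macros are defined in `macros.conf`. A sketch of a pass-through filter macro stanza (the stanza name comes from the SPL above; the `search *` definition is an assumption based on common filter-macro convention, so adjust for your deployment):

```
[aws_exfiltration_via_anomalous_getobject_api_activity_filter]
definition = search *
```

Replacing `search *` with a narrower search (for example, excluding a known backup service's `user_arn`) is how false positives are filtered without editing the detection's SPL.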
List of fields required to use this analytic (as referenced in the SPL above):
- _time
- eventName
- requestParameters.bucketName
- src_ip
- aws_account_id
- user_type
- user_arn
- userIdentity.principalId
How To Implement
You must install the Splunk Add-on for AWS and the Splunk App for AWS. This search works with AWS CloudTrail logs.
Known False Positives
It is possible that a user downloaded these files for legitimate local use, or that AWS services are configured to perform these activities for a legitimate reason. A filter is needed.
Associated Analytic Story
| Risk Score | Impact | Confidence | Message |
| --- | --- | --- | --- |
| 64.0 | 80 | 80 | Anomalous S3 activities detected by user $user_arn$ from $src_ip$ |
The Risk Score is calculated by the following formula: Risk Score = (Impact * Confidence / 100). The initial Confidence and Impact are set by the analytic author.
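Applying the formula to this analytic's values is a one-liner; with Impact = 80 and Confidence = 80 (the values in the table above), the Risk Score comes out to 64.0:

```python
impact = 80
confidence = 80

# Risk Score = (Impact * Confidence / 100)
risk_score = impact * confidence / 100
print(risk_score)  # → 64.0
```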
source | version: 1