How to query AWS WAF log for log4j attacks

1. Set up your Athena table using this instruction: https://docs.aws.amazon.com/athena/latest/ug/wa...
2. Query for findings labeled with Log4JRCE:

SELECT *, unnested.labels.name
FROM "my_db"."waf_logs"
CROSS JOIN UNNEST(labels) UNNESTED (labels)
WHERE unnested.labels.name LIKE '%Log4JRCE%'
Stuff to remember...
python -m pip install --upgrade pip --trusted-host files.pythonhosted.org --trusted-host pypi.org
python -m venv some_name
./some_name/Scripts/activate
I am running this from Docker Desktop 2.3.0.2 on Windows 10. I am on my work network, which brings a special certificate issue.
I'm going to use the official HashiCorp Terraform image.
## Pull down the latest version of terraform from Hashi
FROM hashicorp/terraform:light

## Need this, else you get a cert trust error
COPY "myWork.pem" "/usr/local/share/ca-certificates/"

## Need this to apply the new cert (above) on this box
## https://manpages.ubuntu.com/manpages/xenial/man8/update-ca-certificates.8.html#name
RUN /usr/sbin/update-ca-certificates

## Need this so that it runs terraform upon launch
ENTRYPOINT ["/bin/terraform"]
docker build -t terraform:latest .
docker image ls
docker run --rm -it terraform:latest -version
The eventual docker run command mounts the current directory as /data (the working directory) and the Terraform CLI config directory as /terraform:

docker run --rm -it -e TF_LOG=%debugVar% -e TF_CLI_CONFIG_FILE=%TF_CLI_CONFIG_FILE_NEW% -v %cd%:/data -v %tf_config%:/terraform -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker terraform:latest %newarg%
The full batch wrapper that builds up that command:

@echo off
:dockerizedTerraform
setlocal enabledelayedexpansion

:: Initially set this to 0; remember /A means this is a number type
set /A debug = 0

:: Put all the incoming arguments into another variable (didn't know how to work with %* directly)
set "args=%*"

:: Loop through the arguments; when a flag we care about is found, mark it.
:: If there are more special flags we need to catch, just add them here.
:: Be sure to put quotes around both sides of the comparison.
for %%x in (%*) do (
    if "%%x" == "-debug" set /A debug = 1
)

:: If the debug flag was set to 1 then remove -debug from the args
if %debug%==1 (
    set "newarg=%args:-debug= %"
    set "debugVar=DEBUG"
) else (
    set "newarg=%args%"
    set "debugVar= "
)

:: Use -e for passing environment variables to the Docker container.
:: We need to pass in the environment variable for the token file,
:: but we must mount the volume and pass in the remote-end equivalent.
FOR %%i IN ("%TF_CLI_CONFIG_FILE%") DO (
    REM get the folder path
    set "tf_config=%%~di%%~pi"
    REM get the file name and extension
    set "tf_config_file=%%~ni%%~xi"
)

:: This will be the mount point for the terraform configuration file
set "TF_CONFIG_PATH=terraform"
:: This will be the new config file location on the remote end
set "TF_CLI_CONFIG_FILE_NEW=/%TF_CONFIG_PATH%/%tf_config_file%"

docker run --rm -it -e TF_LOG=%debugVar% -e TF_CLI_CONFIG_FILE=%TF_CLI_CONFIG_FILE_NEW% -v %cd%:/data -v %tf_config%:/terraform -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker terraform:latest %newarg%
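Usage, assuming the script is saved as tf.bat (the name is up to you): running "tf.bat plan -debug" sets TF_LOG=DEBUG, strips -debug from the arguments, and passes plan through to terraform inside the container.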
Be sure to work from a Python virtual environment. Always work from the venv.
1. Install the libraries that you want:
python -m pip install requests
2. Show its dependencies. You'll see a Requires line listing certifi, urllib3, charset-normalizer, idna (you also saw these being installed in step 1):
pip show requests
3. Open the venv's /Lib/site-packages folder
4. Copy requests and all the packages listed under Requires into a separate directory called python
5. Zip up the python folder; the resulting zip should have python/ at its root. (Steps 3-5 can also be scripted; see the sketch below.)
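A minimal Python sketch of steps 3 through 5, assuming the venv from earlier is named some_name and the package list is the one shown above (both are assumptions; adjust for your setup):

import shutil
import zipfile
from pathlib import Path

# Assumption: venv named some_name, as in the commands above
SITE_PACKAGES = Path("some_name/Lib/site-packages")
# requests plus its Requires packages from pip show
PACKAGES = ["requests", "certifi", "urllib3", "charset_normalizer", "idna"]

# Lambda layers expect libraries under a top-level "python" folder
staging = Path("python")
staging.mkdir(exist_ok=True)
for pkg in PACKAGES:
    src = SITE_PACKAGES / pkg
    if src.is_dir():
        shutil.copytree(src, staging / pkg, dirs_exist_ok=True)

# Zip so that "python/" sits at the root of the archive
with zipfile.ZipFile("requests_2_26_0.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in staging.rglob("*"):
        zf.write(path, path.as_posix())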
locals {
  layer_file = "${path.module}/requests_2_26_0.zip"
}

resource "aws_lambda_layer_version" "requests" {
  filename            = local.layer_file
  layer_name          = "requests"
  compatible_runtimes = ["python3.8"]
  ## Hash the hand-built zip directly; there is no archive_file in this variant
  source_code_hash    = filebase64sha256(local.layer_file)
}
Alternatively, have Terraform build the zip for you with an archive_file data source:

locals {
  layer_file = "${path.module}/zips/new_layer.zip"
}

data "archive_file" "requests" {
  type             = "zip"
  output_path      = local.layer_file
  source_dir       = "${path.module}/layer_code"
  output_file_mode = "0666"
}

resource "aws_lambda_layer_version" "new_layer" {
  filename            = local.layer_file
  layer_name          = "new_layer"
  compatible_runtimes = ["python3.8"]
  source_code_hash    = data.archive_file.requests.output_base64sha256
}
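To sanity-check the layer once it's attached to a function, a throwaway handler that just imports requests will do (a hypothetical test function, not part of the layer itself):

import requests  # resolved from the layer, not from the function package

def lambda_handler(event, context):
    # If the layer is missing, the import above fails before we get here
    r = requests.get("https://example.com", timeout=5)
    return {"status": r.status_code}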
In this guide, we configure the Security Hub account so it can take action on any other account in the Organization. Everything is triggered from a CloudWatch Event pattern like the one below.
resource "aws_cloudwatch_event_rule" "example" { name = "Example" description = "Example" event_pattern = <<EOF { "source" : [ "aws.securityhub" ], "detail-type" : [ "Security Hub Findings - Imported" ], "detail" : { "findings" : { "GeneratorId":["arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0/rule/4.3"], "Compliance":{ "Status":["FAILED"] } } } } EOF }
resource "aws_cloudwatch_event_target" "cloudwatch_event_target_sns" { count = local.sns_arn == null ? 0:1 ## [\.\-_A-Za-z0-9]+, 64 characters max rule = aws_cloudwatch_event_rule.cloudwatch_event_rule.name ## some unique assignment ID, random will be assigned since not provided ## target_id ## This is the ARN of the target regardless of the type arn = local.sns_arn }
Be sure the SNS topic's access policy includes this statement:
{
  "Sid": "allow_from_events",
  "Effect": "Allow",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Action": "SNS:Publish",
  "Resource": "arn:aws:sns:us-east-1:99999999999:security-hub-topic"
}
resource "aws_cloudwatch_event_target" "cloudwatch_event_target_lambda" { count = local.lambda_arn == null ? 0:1 ## [\.\-_A-Za-z0-9]+, 64 characters max rule = aws_cloudwatch_event_rule.cloudwatch_event_rule.name ## some unique assignment ID, random will be assigned since not provided ## target_id ## This is the ARN of the target regardless of the type arn = local.lambda_arn }
Create the Lambda function:
resource "aws_lambda_function" "default" { ## ([a-zA-Z0-9-_]+) function_name = local.rule_name filename = local.lambda_zip_file role = local.lambda_role_arn source_code_hash = local.lambda_hash description = "blah" handler = "lambda_function.lambda_handler" runtime = "python3.8" timeout = 60 memory_size = 128 }
data "aws_iam_policy_document" "lambda_role_basic" { ## Create log group statement { sid = "SidLogGroup" actions = ["logs:CreateLogGroup"] resources = ["arn:aws:logs:*:${local.self_account_id}:*"] } ## Create log stream statement { sid = "SidStream" actions = [ "logs:PutLogEvents", "logs:CreateLogStream" ] resources = ["arn:aws:logs:*:${local.self_account_id}:log-group:/aws/lambda/${var.rule_name_prefix}*:*"] } ## Allow this role to assume target roles in other accounts statement { sid = "SidAssumeAccountRoles" actions = ["sts:AssumeRole"] resources = ["arn:aws:iam::*:role/${var.target_role_name}"] } } resource "aws_iam_policy" "lambda_role_basic" { name = "${var.target_role_name}-policy" policy = data.aws_iam_policy_document.lambda_role_basic.json }
data "aws_iam_policy_document" "lambda_role_assume_policy" { statement { actions = ["sts:AssumeRole"] principals { type = "Service" identifiers = ["lambda.amazonaws.com"] } } }
resource "aws_iam_role" "lambda_role" { ## Create a new role and start with Assume Role Policy to let it be used by Lambda
## This is the role that is added to Lambda above name = var.lambda_role_name assume_role_policy = data.aws_iam_policy_document.lambda_role_assume_policy.json } resource "aws_iam_role_policy_attachment" "lambda_role_basic" { ## Attach the permission policy defined above role = aws_iam_role.lambda_role.name policy_arn = aws_iam_policy.lambda_role_basic.arn }
data "aws_iam_policy_document" "security_hub_assume_role_policy" { ## Allow the role from Main Account to assume this role statement { actions = ["sts:AssumeRole"] principals { type = "AWS" identifiers = ["arn:aws:iam::${local.sechub_account_id}:role/${local.security_hub_lambda_role_name}"] } } }
resource "aws_iam_role" "security_hub_role" { ## Create a new role, start with Assume Role Policy name = local.security_hub_role_name assume_role_policy = data.aws_iam_policy_document.security_hub_assume_role_policy.json }
resource "aws_iam_role_policy_attachment" "security_hub_read_policy" { ## Use built-in policy for this role = aws_iam_role.security_hub_role.id policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess" }
data "aws_iam_policy_document" "security_hub_edit_sechub_policy_doc" { statement { actions = [ "securityhub:CreateActionTarget", "securityhub:UpdateFindings", "securityhub:BatchDisableStandards", ] resources = ["arn:aws:securityhub:*:${local.account_id}:hub/default"] } } resource "aws_iam_policy" "security_hub_edit_sechub_policy" { name = "Security-hub-edit-sechub-policy" path = "/" policy = data.aws_iam_policy_document.security_hub_edit_sechub_policy_doc.json } resource "aws_iam_role_policy_attachment" "security_hub_edit_sechub_policy_attach" { role = aws_iam_role.security_hub_role.id policy_arn = aws_iam_policy.security_hub_edit_sechub_policy.arn }
data "aws_iam_policy_document" "security_hub_edit_IAM_policy_doc" { statement { actions = [ "IAM:DeleteAccessKey", "IAM:Detach*", "IAM:DeleteUserPolicy" ] resources = ["*"] } } resource "aws_iam_policy" "security_hub_edit_IAM_policy" { name = "Security-hub-edit-IAM-policy" path = "/" policy = data.aws_iam_policy_document.security_hub_edit_IAM_policy_doc.json } resource "aws_iam_role_policy_attachment" "security_hub_edit_IAM_policy_attach" { role = aws_iam_role.security_hub_role.id policy_arn = aws_iam_policy.security_hub_edit_IAM_policy.arn }
If you want to send alerts directly from EventBridge to SQS, you must modify your SQS queue's access policy to accept messages from the rule.
{
  "Sid": "EventsToMyQueue",
  "Effect": "Allow",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "ArnEquals": {
      "aws:SourceArn": "arn:aws:events:region:account-id:rule/rule-name"
    }
  }
}

Reference: SQS Permissions.
This is fine if you know the ARN of the rule in advance. However, if you need this SQS queue to receive messages from rules that aren't known ahead of time, and you still want to limit who or what can send to the queue, then you can't seem to go direct to SQS.
Because we're in an AWS Organization, I tried this policy instead, but unfortunately service principals don't carry the necessary aws:PrincipalOrgID information. Or I did it wrong...
{
  "Sid": "doesnt_work",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": "o-9999999999"
    }
  }
}
Instead of sending directly to SQS, we can pair a Lambda with the EventBridge rule, attach the necessary role and permissions to the Lambda function, and then use the PrincipalOrgID condition on the SQS side.
When you create your EventBridge rule with a Lambda target, the following permission needs to be attached to the Lambda.
{
  "Effect": "Allow",
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:region:account-id:function:function-name",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:events:region:account-id:rule/rule-name"
    }
  },
  "Sid": "InvokeLambdaFunction"
}
Reference: Lambda Permissions.
And the IAM Role attached to the Lambda function needs to have (at least) this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SidOrgLook",
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:region:account-id:queue-name"
    }
  ]
}
The Lambda would have a function like this to send to the queue:
import os

import boto3
from botocore.exceptions import ClientError

def send_message(message, region):
    queue_name = os.environ.get("queue_name", "generic_queue")
    sqs = boto3.resource('sqs', region_name=region)
    try:
        queue = sqs.get_queue_by_name(QueueName=queue_name)
    except ClientError:
        print("Error fetching the queue by name")
        return False
    try:
        response = queue.send_message(MessageBody=message)
    except ClientError:
        print("Error sending message")
        return False
    return response.get('MessageId')
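A hedged handler wrapper around it that forwards the whole event as JSON (the hardcoded region is an assumption):

import json

def lambda_handler(event, context):
    # Forward the entire EventBridge event to the queue as JSON
    return send_message(json.dumps(event), "us-east-1")  # assumed region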
Then the SQS queue itself has this access policy:
{
  "Sid": "sqsQueueOrgSendPolicy",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "SQS:SendMessage"
  ],
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": "o-9999999999"
    }
  }
}
Don't copy & paste the policy into the SQS Access Policy web console; I kept running into invalid JSON errors. Set it through an API call instead.
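For example, a minimal boto3 sketch of setting that policy via the API (queue URL, region, and ARNs are placeholders):

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # placeholder region

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "sqsQueueOrgSendPolicy",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "SQS:SendMessage",
        "Resource": "arn:aws:sqs:region:account-id:queue-name",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-9999999999"}},
    }],
}

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/account-id/queue-name",  # placeholder
    Attributes={"Policy": json.dumps(policy)},
)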
Go to Pipelines, click New Pipeline
Select Azure Repos Git
Select Starter Pipeline
Clear the content
Paste this:
name: '$(BuildDefinitionName)_$(Build.BuildId)'

trigger:
  branches:
    include:
    - master

variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)]
  MODULEVERSION: '$(major).$(minor).$(revision)'
In the assistant pane, search for Replace Tokens
In the assistant pane, search for NuGet (just the regular NuGet task) and fill in its fields; this generates the first NuGet task in the YAML.
Back in the assistant pane, search again for NuGet and add the second NuGet task.
That's it!
Save and run it.