Friday, December 17, 2021

AWS WAF log4j query

How to query AWS WAF logs for Log4j attacks

1. Set up your Athena table using these instructions

2. Use this query
SELECT
  *,
  unnested.labels.name
FROM "my_db"."waf_logs"
CROSS JOIN UNNEST(labels) AS unnested (labels)
WHERE unnested.labels.name LIKE '%Log4JRCE%'
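
For a quick local sanity check outside Athena, a similar filter can be sketched in Python. This is only an illustration (the log lines and regex are simplifications I'm adding here, not part of the WAF setup); rely on the WAF labels as in the query above for real detection.

```python
import re

# Match the classic Log4Shell probe string "${jndi:<scheme>:" (case-insensitive).
# Real-world payloads are often obfuscated (e.g. "${${lower:j}ndi:..."), which
# this simple check will miss -- that's why the managed WAF rules do the work.
LOG4J_RE = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns|iiop|http):", re.IGNORECASE)

def looks_like_log4j_probe(log_line: str) -> bool:
    """Return True if the line contains an un-obfuscated JNDI lookup string."""
    return LOG4J_RE.search(log_line) is not None
```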

Monday, August 23, 2021

Python Hints


Untrusted target

If you get an untrusted certificate error when trying to download a library because of your firewall, you can do this (only if you really trust the source):
python -m pip install --upgrade pip --trusted-host files.pythonhosted.org --trusted-host pypi.org

You can also put the untrusted cert in the location defined by this environment variable: REQUESTS_CA_BUNDLE

Working with venv

1. Create a new environment. This creates a directory named some_name under wherever you execute this:
python -m venv some_name
You should see Include, Lib, and Scripts folders and a pyvenv.cfg file in this directory.
2. Go into that directory, then activate the environment:
./Scripts/activate
3. Now you should see (some_name) in front of your prompt. Any Python (and pip) action taken here is isolated to this venv.
4. To exit, just type deactivate.
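
Beyond the (some_name) prefix on your prompt, you can confirm you're inside the venv from Python itself (works on Python 3.3+):

```python
import sys

def in_venv() -> bool:
    """True when the interpreter is running inside a venv.

    Inside a venv, sys.prefix points at the venv directory while
    sys.base_prefix still points at the base installation.
    """
    return sys.prefix != sys.base_prefix

print(f"prefix:      {sys.prefix}")
print(f"base prefix: {sys.base_prefix}")
print(f"in venv:     {in_venv()}")
```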


Sorting a List of Dictionaries

sorted(ListOfDict, key=lambda item: item['SomeKeyName'])
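
A fuller sketch of the same idea, including operator.itemgetter (a common alternative to the lambda), a descending sort, and handling dicts that may be missing the key. The data and key names here are made up for illustration:

```python
from operator import itemgetter

people = [
    {"name": "Carol", "age": 35},
    {"name": "Alice", "age": 30},
    {"name": "Bob", "age": 25},
]

# Lambda form, as above
by_age = sorted(people, key=lambda item: item["age"])

# itemgetter is equivalent and slightly faster
by_age2 = sorted(people, key=itemgetter("age"))

# Descending
by_age_desc = sorted(people, key=itemgetter("age"), reverse=True)

# If some dicts may be missing the key, supply a default with .get()
by_age_safe = sorted(people, key=lambda item: item.get("age", 0))

print([p["name"] for p in by_age])  # ['Bob', 'Alice', 'Carol']
```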






Tuesday, August 3, 2021

Run Terraform in Docker

How to run Terraform in a container!

I am running this from Docker Desktop 2.3.0.2 on Windows 10. I am on my work network, which brings a special certificate issue.

We're going to use the official HashiCorp Terraform image.

Creating New Image to incorporate your certificate

Create a new Dockerfile and insert the following. Be sure to have your PEM file in the same directory.
##Pull down the latest version of terraform from Hashi
FROM hashicorp/terraform:light
##Need this else you get cert trust error
COPY "myWork.pem" "/usr/local/share/ca-certificates/"
##Need this to apply the new cert (above) on this box
##https://manpages.ubuntu.com/manpages/xenial/man8/update-ca-certificates.8.html#name
RUN "/usr/sbin/update-ca-certificates"

##Need this so that it runs terraform upon launch
ENTRYPOINT ["/bin/terraform"]

Run the following command to build your image
docker build -t terraform:latest .

Now you should see the new image...
docker image ls

Create a terraform launcher

Now you can launch this image every time you want to invoke Terraform. 
docker run --rm -it terraform:latest -version

But that's not very helpful on its own, so we'll need to attach some volumes to make it useful. Here is a more useful call to this image from Docker
docker run --rm -it -e TF_LOG=%debugVar% -e TF_CLI_CONFIG_FILE=%TF_CLI_CONFIG_FILE_NEW% -v %cd%:/data -v %tf_config%:/terraform -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker terraform:latest %newarg%

Now we'll make this into a batch script and ensure it's accessible from %PATH%. You can also create an alias if you have permission to edit the Registry.

This batch script adds some extra features
  • Enable toggling Debug at will
  • Ability to pass in Terraform Config
  • Ability to write and read back credential via Terraform Login
@echo off
:dockerizedTerraform
setlocal enabledelayedexpansion
:: Initially set this to 0; remember /A means this is number type
set /A debug = 0
::Set all the incoming arguments into another variable, didn't know how to work with %* directly
set "args=%*"
::loop through the arguments; when something we want to catch is found, flag it
:: if there are more special flags we need to catch, just add them here
:: be sure to put quotes around both sides of the comparison
for %%x in (%*) do (
  if "%%x" == "-debug" set /A debug = 1
)
:: If debug flag was set to 1 then remove -debug from the args
if %debug%==1 (
  set "newarg=%args:-debug= %"
  set "debugVar=DEBUG"
) else (
  set "newarg=%args%"
  set "debugVar= "
)
:: use -e for passing in environment variables to Docker container
::Need to pass in environment variable for the token file
:: but we need to mount the volume and pass in the remote-end equivalent 
FOR %%i IN ("%TF_CLI_CONFIG_FILE%") DO (
  REM get the folder path (:: comments can break inside a parenthesized block, so use REM here)
  set "tf_config=%%~di%%~pi"
  REM get the file name and extension
  set "tf_config_file=%%~ni%%~xi"
)
::This will be the mount point for the terraform configuration file
set "TF_CONFIG_PATH=terraform"
::This will be the new config file location in the remote-end
set "TF_CLI_CONFIG_FILE_NEW=/%TF_CONFIG_PATH%/%tf_config_file%"

docker run --rm -it -e TF_LOG=%debugVar% -e TF_CLI_CONFIG_FILE=%TF_CLI_CONFIG_FILE_NEW% -v %cd%:/data -v %tf_config%:/terraform -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker terraform:latest %newarg%

You can download the files here.

How to get your PEM


Monday, July 26, 2021

Creating AWS Lambda Layer


How to create AWS Lambda Layer (for Python)

From Library

Be sure to work from a Python virtual environment. Always work from a venv (see the Working with venv notes above).

1. Install the libraries that you want

python -m pip install requests

2. Show its dependencies. You'll see a Requires line listing certifi, urllib3, charset-normalizer, and idna

pip show requests

3. Open the folder /Lib/site-packages inside your venv

4. Copy the requests package and all of the listed Requires packages into a separate directory called python

5. Zip up this folder so that python sits at the root of the zip

6. Terraform code to use it

locals {
    layer_file = "${path.module}/requests_2_26_0.zip"
}

resource "aws_lambda_layer_version" "requests" {
  filename            = local.layer_file
  layer_name          = "requests"
  compatible_runtimes = ["python3.8"]
  ## The zip above was built by hand, so hash the file directly
  source_code_hash    = filebase64sha256(local.layer_file)
}
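
Steps 3-5 can also be scripted. This is a sketch using Python's zipfile module; the helper name is mine, and you'd point it at your own venv's site-packages:

```python
import zipfile
from pathlib import Path

def build_layer_zip(site_packages: Path, packages, out_zip: Path) -> None:
    """Copy the given packages from site-packages into a zip whose root
    directory is python/, which is the layout Lambda expects for a layer."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for pkg in packages:
            for file in (site_packages / pkg).rglob("*"):
                if file.is_file():
                    # arcname places everything under python/<package>/...
                    zf.write(file, Path("python") / file.relative_to(site_packages))
```

For the requests example above, the call would look something like build_layer_zip(Path(".venv/Lib/site-packages"), ["requests", "certifi", "urllib3", "charset_normalizer", "idna"], Path("requests_2_26_0.zip")) -- note the installed folder is charset_normalizer with an underscore.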


From Custom Code

1. Create a new folder called python
2. Drop your python code into this folder
3. Create a zip file where python is at root
4. Terraform code to use (this code will handle the zipping). Your Python code should be located at layer_code/python/my_code.py under the module directory.
locals {
    layer_file = "${path.module}/zips/new_layer.zip"
}
data "archive_file" "requests" {
  type             = "zip"
  output_path      = local.layer_file
  source_dir       = "${path.module}/layer_code"
  output_file_mode = "0666"
}
resource "aws_lambda_layer_version" "new_layer" {
  filename            = local.layer_file
  layer_name          = "new_layer"
  compatible_runtimes = ["python3.8"]
  source_code_hash    = data.archive_file.requests.output_base64sha256
}

Monday, July 19, 2021

AWS Security Hub Auto Remediation


In this guide, we configure the Security Hub account to be able to take action on any other account in the Organization. This is all triggered from a CloudWatch Event pattern.

Cloudwatch Event Pattern

Here's an example of the CW Event Pattern. 

resource "aws_cloudwatch_event_rule" "example" {
  name        = "Example"
  description = "Example"

  event_pattern = <<EOF
{
    "source" : [
      "aws.securityhub"
    ],
    "detail-type" : [
      "Security Hub Findings - Imported"
    ],
    "detail" : {
      "findings" : {
         "GeneratorId":["arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0/rule/4.3"],
         "Compliance":{
            "Status":["FAILED"]
        }
      }
    }
  }
EOF
}
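
To see what this pattern does, here's a simplified Python sketch of the matching logic. EventBridge's real matcher supports many more operators; this only covers the nested "equals any of" case used above, and treats a list in the event (like findings) as matching when any element matches:

```python
def matches(pattern, event):
    """Simplified EventBridge-style pattern match: every key in the pattern
    must exist in the event; dicts recurse, and a list in the pattern means
    the event value must equal one of the listed values."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        actual = event[key]
        if isinstance(expected, dict):
            if isinstance(actual, list):
                # e.g. detail.findings is a list of finding objects
                if not any(matches(expected, item) for item in actual):
                    return False
            elif not matches(expected, actual):
                return False
        else:
            # Pattern leaf is a list of accepted values
            values = actual if isinstance(actual, list) else [actual]
            if not any(v in expected for v in values):
                return False
    return True
```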

Cloudwatch Event Targets

SNS

Pass in a topic ARN and enable this target.

resource "aws_cloudwatch_event_target" "cloudwatch_event_target_sns" {
  count = local.sns_arn == null ? 0:1
  ## [\.\-_A-Za-z0-9]+, 64 characters max
  rule = aws_cloudwatch_event_rule.cloudwatch_event_rule.name
  ## some unique assignment ID, random will be assigned since not provided
  ## target_id
  ## This is the ARN of the target regardless of the type
  arn = local.sns_arn
}

Be sure the SNS topic's access policy includes this statement

{
 "Sid": "allow_from_events",
 "Effect": "Allow",
 "Principal": {
   "Service": "events.amazonaws.com"
 },
 "Action": "SNS:Publish",
 "Resource": "arn:aws:sns:us-east-1:99999999999:security-hub-topic"
}

Lambda

Point to the Lambda function ARN and enable this target. Note that the rule also needs permission to invoke the function, granted with an aws_lambda_permission resource whose principal is events.amazonaws.com.

resource "aws_cloudwatch_event_target" "cloudwatch_event_target_lambda" {
  count = local.lambda_arn == null ? 0:1
  ## [\.\-_A-Za-z0-9]+, 64 characters max
  rule = aws_cloudwatch_event_rule.cloudwatch_event_rule.name
  ## some unique assignment ID, random will be assigned since not provided
  ## target_id
  ## This is the ARN of the target regardless of the type
  arn = local.lambda_arn
}

Lambda Function

Create the Lambda function here

resource "aws_lambda_function" "default" {
  ## ([a-zA-Z0-9-_]+)
  function_name    = local.rule_name
  filename         = local.lambda_zip_file
  role             = local.lambda_role_arn
  source_code_hash = local.lambda_hash

  description = "blah" 
  handler     = "lambda_function.lambda_handler" 
  runtime     = "python3.8"
  timeout     = 60
  memory_size = 128
}
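
The function body depends on what you remediate, but the cross-account plumbing always looks roughly the same. A hypothetical sketch (the handler name matches the lambda_function.lambda_handler setting above; TARGET_ROLE_NAME, the session name, and the print are my assumptions, not prescribed by the guide):

```python
import json
import os

# Must match var.target_role_name in every member account (hypothetical default)
TARGET_ROLE_NAME = os.environ.get("TARGET_ROLE_NAME", "security-hub-remediation-role")

def target_role_arn(account_id: str) -> str:
    """Build the ARN of the remediation role in the finding's account."""
    return f"arn:aws:iam::{account_id}:role/{TARGET_ROLE_NAME}"

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime

    sts = boto3.client("sts")
    for finding in event["detail"]["findings"]:
        account_id = finding["AwsAccountId"]
        # Assume the matching role in the account that owns the finding
        creds = sts.assume_role(
            RoleArn=target_role_arn(account_id),
            RoleSessionName="sechub-remediation",
        )["Credentials"]
        # Clients built from these credentials act inside the target account
        iam = boto3.client(
            "iam",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        # ...call iam (or other service clients) here to remediate the finding...
        print(json.dumps({"finding": finding["Id"], "account": account_id}))
```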

Lambda Role

Permission Policy

data "aws_iam_policy_document" "lambda_role_basic" {
  ## Create log group
  statement {
    sid       = "SidLogGroup"
    actions   = ["logs:CreateLogGroup"]
    resources = ["arn:aws:logs:*:${local.self_account_id}:*"]
  }

  ## Create log stream
  statement {
    sid = "SidStream"
    actions = [
      "logs:PutLogEvents",
      "logs:CreateLogStream"
    ]
    resources = ["arn:aws:logs:*:${local.self_account_id}:log-group:/aws/lambda/${var.rule_name_prefix}*:*"]
  }

  ## Allow this role to assume target roles in other accounts
  statement {
    sid       = "SidAssumeAccountRoles"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::*:role/${var.target_role_name}"]
  }
}

resource "aws_iam_policy" "lambda_role_basic" {
  name   = "${var.target_role_name}-policy"
  policy = data.aws_iam_policy_document.lambda_role_basic.json
}

Assume Role Policy

data "aws_iam_policy_document" "lambda_role_assume_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

Role

resource "aws_iam_role" "lambda_role" {
  ## Create a new role and start with Assume Role Policy to let it be used by Lambda
  ## This is the role that is added to Lambda above
  name               = var.lambda_role_name
  assume_role_policy = data.aws_iam_policy_document.lambda_role_assume_policy.json
}

resource "aws_iam_role_policy_attachment" "lambda_role_basic" {
  ## Attach the permission policy defined above
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_role_basic.arn
}

Target Account's Role

Assume Role Policy

data "aws_iam_policy_document" "security_hub_assume_role_policy" {
  ## Allow the role from Main Account to assume this role
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${local.sechub_account_id}:role/${local.security_hub_lambda_role_name}"]
    }
  }
}

Role

resource "aws_iam_role" "security_hub_role" {
  ## Create a new role, start with Assume Role Policy
  name               = local.security_hub_role_name
  assume_role_policy = data.aws_iam_policy_document.security_hub_assume_role_policy.json
}

Permission Policies

  • Give the SecHub account permission to take action inside this account when something triggers in SecHub
  • This role needs to be assumable from the SecHub account
  • This role needs whatever permissions are required to remediate the SecHub findings
  • A role can have at most 20 attached policies (10 by default; 20 after raising the limit to the max), so be wise about how you split these up
  • Each policy can only be up to 5000 characters (1500 by default; 5000 after raising the limit to the max), so be wise about how you word your policy
  • The target role name must be exactly the same in every account, since the assuming role's policy references it by name

ReadOnly Policy

resource "aws_iam_role_policy_attachment" "security_hub_read_policy" {
  ## Use built-in policy for this
  role       = aws_iam_role.security_hub_role.id
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

Ability to Update Security Hub Finding Policy

data "aws_iam_policy_document" "security_hub_edit_sechub_policy_doc" {
  statement {
    actions = [
      "securityhub:CreateActionTarget",
      "securityhub:UpdateFindings",
      "securityhub:BatchDisableStandards",
    ]
    resources = ["arn:aws:securityhub:*:${local.account_id}:hub/default"]
  }
}

resource "aws_iam_policy" "security_hub_edit_sechub_policy" {
  name   = "Security-hub-edit-sechub-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.security_hub_edit_sechub_policy_doc.json
}

resource "aws_iam_role_policy_attachment" "security_hub_edit_sechub_policy_attach" {
  role       = aws_iam_role.security_hub_role.id
  policy_arn = aws_iam_policy.security_hub_edit_sechub_policy.arn
}

Ability to Edit IAM Policy

data "aws_iam_policy_document" "security_hub_edit_IAM_policy_doc" {
  statement {
    actions = [
      "iam:DeleteAccessKey",
      "iam:Detach*",
      "iam:DeleteUserPolicy"
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "security_hub_edit_IAM_policy" {
  name   = "Security-hub-edit-IAM-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.security_hub_edit_IAM_policy_doc.json
}

resource "aws_iam_role_policy_attachment" "security_hub_edit_IAM_policy_attach" {
  role       = aws_iam_role.security_hub_role.id
  policy_arn = aws_iam_policy.security_hub_edit_IAM_policy.arn
}
