Friday, December 17, 2021

AWS WAF log4j query

How to query AWS WAF log for log4j attacks

1. Set up your Athena table using these instructions

2. Use this query:
SELECT
  *,
  unnested.labels.name
FROM "my_db"."waf_logs"
CROSS JOIN UNNEST(labels) AS unnested (labels)
WHERE unnested.labels.name LIKE '%Log4JRCE%'
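
If you'd rather kick this off from code than the Athena console, here's a minimal boto3 sketch (the database, table, and results bucket names are placeholders; swap in your own):

import time
import boto3

athena = boto3.client("athena")

# Same query as above; database/table names and the output bucket are placeholders
QUERY = """
SELECT *, unnested.labels.name
FROM "my_db"."waf_logs"
CROSS JOIN UNNEST(labels) AS unnested (labels)
WHERE unnested.labels.name LIKE '%Log4JRCE%'
"""

def run_query():
    start = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "my_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/waf/"},
    )
    query_id = start["QueryExecutionId"]
    # Poll until Athena finishes the query
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=query_id)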

Monday, August 23, 2021

Python Hints

Python hints

Untrusted certificate

If you get an untrusted certificate error when trying to download a library because of your firewall, you can do this (only if you really trust the source):
python -m pip install --upgrade pip --trusted-host files.pythonhosted.org --trusted-host pypi.org

You can also save the untrusted cert somewhere and point this environment variable at it: REQUESTS_CA_BUNDLE
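
For example, a quick way to check that Python's requests library picks up the bundle (the cert path here is hypothetical; point it at wherever you saved your PEM):

import os
import requests

# Hypothetical path to your corporate root CA bundle (PEM format)
os.environ["REQUESTS_CA_BUNDLE"] = r"C:\certs\myWork.pem"

# If the bundle is trusted, this returns 200 instead of raising an SSLError
resp = requests.get("https://pypi.org/simple/")
print(resp.status_code)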

Working with venv

1. Create a new environment. This creates a directory named some_name under the directory where you run this:
python -m venv some_name
You should see Include, Lib, and Scripts folders and a pyvenv.cfg file in this directory.
2. Go into that directory, then activate the environment:
./Scripts/activate
3. Now you should see (some_name) in front of your prompt. Any Python (and pip) action taken here is isolated to this venv.
4. To exit, just type deactivate (it works from anywhere while the venv is active)


Sorting a List of Dictionaries

sorted(ListOfDict, key=(lambda item: item['SomeKeyName']))
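
A quick example with a made-up list:

servers = [
    {"Name": "web01", "Count": 7},
    {"Name": "db01", "Count": 2},
    {"Name": "app01", "Count": 5},
]

# Sort by the Count key; pass reverse=True for descending order
by_count = sorted(servers, key=(lambda item: item["Count"]))
print([s["Name"] for s in by_count])   # ['db01', 'app01', 'web01']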






Tuesday, August 3, 2021

Run Terraform in Docker

How to run Terraform in a container!

I am running this from Docker Desktop 2.3.0.2 on Windows 10. I am on my work network, which brings a special certificate issue.

We're going to use the official HashiCorp Terraform image.

Creating New Image to incorporate your certificate

Create a new Dockerfile and insert the following. Be sure to have your PEM file in the same directory.
##Pull down the latest version of terraform from Hashi
FROM hashicorp/terraform:light
##Need this else you get cert trust error
COPY "myWork.pem" "/usr/local/share/ca-certificates/"
##Need this to apply the new cert (above) on this box
##https://manpages.ubuntu.com/manpages/xenial/man8/update-ca-certificates.8.html#name
RUN "/usr/sbin/update-ca-certificates"

##Need this so that it runs terraform upon launch
ENTRYPOINT ["/bin/terraform"]

Run the following command to build your image:
docker build -t terraform:latest .

Now you should see a new image...
docker image ls

Create a terraform launcher

Now you can launch this image every time you want to invoke Terraform. 
docker run --rm -it terraform:latest -version

But that's not very helpful, so we'll need to attach some volumes to make this useful. See this link for details. Here is a more useful call to the image from Docker:
docker run --rm -it -e TF_LOG=%debugVar% -e TF_CLI_CONFIG_FILE=%TF_CLI_CONFIG_FILE_NEW% -v %cd%:/data -v %tf_config%:/terraform -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker terraform:latest %newarg%

Now we'll make this into a Batch script and ensure it's accessible from %PATH%. You can also create an alias if you have permission to edit the Registry.

This Batch script adds some extra features:
  • Enable toggling Debug at will
  • Ability to pass in Terraform Config
  • Ability to write and read back credential via Terraform Login
@echo off
:dockerizedTerraform
setlocal enabledelayedexpansion
:: Initially set this to 0; remember /A means this is a number type
set /A debug = 0
::Set all the incoming arguments into another variable (didn't know how to work with %* directly)
set "args=%*"
::Loop through the arguments; when something we want is found, flag it
:: If there are more special flags we need to catch, just add them here
:: Be sure to put quotes around both sides of the comparison
for %%x in (%*) do (
  if "%%x" == "-debug" set /A debug = 1
)
:: If debug flag was set to 1 then remove -debug from the args
if %debug%==1 (
  set "newarg=%args:-debug= %"
  set "debugVar=DEBUG"
) else (
  set "newarg=%args%"
  set "debugVar= "
)
:: use -e for passing in environment variables to Docker container
::Need to pass in environment variable for the token file
:: but we need to mount the volume and pass in the remote-end equivalent 
FOR %%i IN ("%TF_CLI_CONFIG_FILE%") DO (
  :: get the folder path
  set "tf_config=%%~di%%~pi"
  :: get the file name and extension
  set "tf_config_file=%%~ni%%~xi"
)
::This will be the mount point for the terraform configuration file
set "TF_CONFIG_PATH=terraform"
::This will be the new config file location on the remote end
set "TF_CLI_CONFIG_FILE_NEW=/%TF_CONFIG_PATH%/%tf_config_file%"

docker run --rm -it -e TF_LOG=%debugVar% -e TF_CLI_CONFIG_FILE=%TF_CLI_CONFIG_FILE_NEW% -v %cd%:/data -v %tf_config%:/terraform -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker terraform:latest %newarg%

You can download the files here.

How to get your PEM







Monday, July 26, 2021

Creating AWS Lambda Layer

Creating AWS Lambda Layer

How to create an AWS Lambda Layer (for Python)

From Library

Be sure to work from a Python virtual environment. Always work from a venv. Follow this link; you can also see here.

1. Install the libraries that you want

python -m pip install requests

2. Show its dependencies (you'll also see it being installed). You'll see a Requires line listing certifi, urllib3, charset-normalizer, and idna

pip show requests

3. Open the venv's /Lib/site-packages folder

4. Copy requests and all the listed Requires packages into a separate directory called python

5. Zip it up so that the python folder is at the root of the archive. The resulting zip and contents:




6. Terraform Code to use 

locals {
    layer_file = "${path.module}/requests_2_26_0.zip"
}

resource "aws_lambda_layer_version" "requests" {
  filename            = local.layer_file
  layer_name          = "requests"
  compatible_runtimes = ["python3.8"]
  ## The zip was built by hand above, so hash the file directly
  source_code_hash    = filebase64sha256(local.layer_file)
}


From Custom Code

1. Create a new folder called python
2. Drop your python code into this folder
3. The zip needs to have python at its root (the Terraform below creates it for you)
4. Terraform code to use (this code handles the zipping). Your Python code should be located at /layer_code/python/my_code.py
locals {
    layer_file = "${path.module}/zips/new_layer.zip"
}
data "archive_file" "requests" {
  type             = "zip"
  output_path      = local.layer_file
  source_dir       = "${path.module}/layer_code"
  output_file_mode = "0666"
}
resource "aws_lambda_layer_version" "new_layer" {
  filename            = local.layer_file
  layer_name          = "new_layer"
  compatible_runtimes = ["python3.8"]
  source_code_hash    = data.archive_file.requests.output_base64sha256
}
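
Once a layer is attached to a function (via the layers argument on aws_lambda_function), its contents are importable like any other module, since Lambda puts the layer's python folder on the path. A minimal sketch of a handler using both layers above; my_code and its process function are hypothetical names based on step 4:

import requests    # provided by the "requests" layer built above
import my_code     # provided by the custom-code layer (my_code.py from step 4)

def lambda_handler(event, context):
    # Use the library that came from the layer
    resp = requests.get("https://example.com/health")
    # Call into the custom layer code (process() is a hypothetical function name)
    result = my_code.process(event)
    return {"status": resp.status_code, "result": result}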

Monday, July 19, 2021

AWS Security Hub Auto Remediation

 AWS Security Hub Auto Remediation

In this guide, we configure the Security Hub account to be able to take action in any other account in the Organization. This is all triggered from a CloudWatch Event pattern. The result looks like this.




Cloudwatch Event Pattern

Here's an example of the CW Event Pattern. 

resource "aws_cloudwatch_event_rule" "example" {
  name        = "Example"
  description = "Example"

  event_pattern = <<EOF
{
    "source" : [
      "aws.securityhub"
    ],
    "detail-type" : [
      "Security Hub Findings - Imported"
    ],
    "detail" : {
      "findings" : {
         "GeneratorId":["arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0/rule/4.3"],
         "Compliance":{
            "Status":["FAILED"]
        }
      }
    }
  }
EOF
}

Cloudwatch Event Targets

SNS

Pass in a Topic arn and enable this target. 

resource "aws_cloudwatch_event_target" "cloudwatch_event_target_sns" {
  count = local.sns_arn == null ? 0:1
  ## [\.\-_A-Za-z0-9]+, 64 characters max
  rule = aws_cloudwatch_event_rule.cloudwatch_event_rule.name
  ## some unique assignment ID, random will be assigned since not provided
  ## target_id
  ## This is the ARN of the target regardless of the type
  arn = local.sns_arn
}

Be sure the SNS topic's permissions include this:

{
 "Sid": "allow_from_events",
 "Effect": "Allow",
 "Principal": {
   "Service": "events.amazonaws.com"
 },
 "Action": "SNS:Publish",
 "Resource": "arn:aws:sns:us-east-1:99999999999:security-hub-topic"
}

Lambda

Point to the Lambda function ARN and enable this target.

resource "aws_cloudwatch_event_target" "cloudwatch_event_target_lambda" {
  count = local.lambda_arn == null ? 0:1
  ## [\.\-_A-Za-z0-9]+, 64 characters max
  rule = aws_cloudwatch_event_rule.cloudwatch_event_rule.name
  ## some unique assignment ID, random will be assigned since not provided
  ## target_id
  ## This is the ARN of the target regardless of the type
  arn = local.lambda_arn
}

Lambda Function

Create the lambda function here

resource "aws_lambda_function" "default" {
  ## ([a-zA-Z0-9-_]+)
  function_name    = local.rule_name
  filename         = local.lambda_zip_file
  role             = local.lambda_role_arn
  source_code_hash = local.lambda_hash

  description = "blah" 
  handler     = "lambda_function.lambda_handler" 
  runtime     = "python3.8"
  timeout     = 60
  memory_size = 128
}
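
The Terraform above expects the handler to be lambda_function.lambda_handler, but the function code itself isn't shown here. A hedged sketch of what lambda_function.py could look like: pull the findings out of the event, assume the target role in the account that raised the finding, and remediate. The environment variable, session name, and the specific remediation call are all assumptions; swap in whatever your GeneratorId actually requires.

import os
import boto3

# Assumed environment variable; should line up with var.target_role_name used in the roles below
TARGET_ROLE_NAME = os.environ.get("target_role_name", "security-hub-remediation-role")

def assume_target_role(account_id):
    """Assume the remediation role in the member account and return a boto3 session."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{TARGET_ROLE_NAME}",
        RoleSessionName="sechub-remediation",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

def lambda_handler(event, context):
    findings = event["detail"]["findings"]
    for finding in findings:
        account_id = finding["AwsAccountId"]
        session = assume_target_role(account_id)
        # Example remediation (assumption): deactivate the access key the finding complains about.
        # iam = session.client("iam")
        # iam.update_access_key(UserName=..., AccessKeyId=..., Status="Inactive")
    return {"findings_processed": len(findings)}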

Lambda Role

Permission Policy

data "aws_iam_policy_document" "lambda_role_basic" {
  ## Create log group
  statement {
    sid       = "SidLogGroup"
    actions   = ["logs:CreateLogGroup"]
    resources = ["arn:aws:logs:*:${local.self_account_id}:*"]
  }

  ## Create log stream
  statement {
    sid = "SidStream"
    actions = [
      "logs:PutLogEvents",
      "logs:CreateLogStream"
    ]
    resources = ["arn:aws:logs:*:${local.self_account_id}:log-group:/aws/lambda/${var.rule_name_prefix}*:*"]
  }

  ## Allow this role to assume target roles in other accounts
  statement {
    sid       = "SidAssumeAccountRoles"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::*:role/${var.target_role_name}"]
  }
}

resource "aws_iam_policy" "lambda_role_basic" {
  name   = "${var.target_role_name}-policy"
  policy = data.aws_iam_policy_document.lambda_role_basic.json
}

Assume Role Policy

data "aws_iam_policy_document" "lambda_role_assume_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

Role

resource "aws_iam_role" "lambda_role" {
  ## Create a new role and start with Assume Role Policy to let it be used by Lambda
  ## This is the role that is added to Lambda above
  name               = var.lambda_role_name
  assume_role_policy = data.aws_iam_policy_document.lambda_role_assume_policy.json
}

resource "aws_iam_role_policy_attachment" "lambda_role_basic" {
  ## Attach the permission policy defined above
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_role_basic.arn
}

Target Account's Role

Assume Role Policy

data "aws_iam_policy_document" "security_hub_assume_role_policy" {
  ## Allow the role from Main Account to assume this role
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${local.sechub_account_id}:role/${local.security_hub_lambda_role_name}"]
    }
  }
}

Role

resource "aws_iam_role" "security_hub_role" {
  ## Create a new role, start with Assume Role Policy
  name               = local.security_hub_role_name
  assume_role_policy = data.aws_iam_policy_document.security_hub_assume_role_policy.json
}

Permission Policies

  • Give the SecHub account permission to take action inside this account when something triggers in SecHub
  • This role needs to be assumable from the SecHub account
  • This role needs to be able to do whatever is required to remediate any SecHub findings
  • A role can only have up to 20 attached policies (10 by default, 20 once you raise the limit to the max), so be wise about how you split this
  • Each policy can only be up to 5000 characters (1500 by default, 5000 once you raise the limit to the max), so be wise about how you word your policy
  • The role name the Lambda tries to assume and the role name deployed in the target accounts need to match exactly

ReadOnly Policy

resource "aws_iam_role_policy_attachment" "security_hub_read_policy" {
  ## Use built-in policy for this
  role       = aws_iam_role.security_hub_role.id
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

Ability to Update Security Hub Finding Policy

data "aws_iam_policy_document" "security_hub_edit_sechub_policy_doc" {
  statement {
    actions = [
      "securityhub:CreateActionTarget",
      "securityhub:UpdateFindings",
      "securityhub:BatchDisableStandards",
    ]
    resources = ["arn:aws:securityhub:*:${local.account_id}:hub/default"]
  }
}

resource "aws_iam_policy" "security_hub_edit_sechub_policy" {
  name   = "Security-hub-edit-sechub-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.security_hub_edit_sechub_policy_doc.json
}

resource "aws_iam_role_policy_attachment" "security_hub_edit_sechub_policy_attach" {
  role       = aws_iam_role.security_hub_role.id
  policy_arn = aws_iam_policy.security_hub_edit_sechub_policy.arn
}

Ability to Edit IAM Policy

data "aws_iam_policy_document" "security_hub_edit_IAM_policy_doc" {
  statement {
    actions = [
      "IAM:DeleteAccessKey",
      "IAM:Detach*",
      "IAM:DeleteUserPolicy"
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "security_hub_edit_IAM_policy" {
  name   = "Security-hub-edit-IAM-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.security_hub_edit_IAM_policy_doc.json
}

resource "aws_iam_role_policy_attachment" "security_hub_edit_IAM_policy_attach" {
  role       = aws_iam_role.security_hub_role.id
  policy_arn = aws_iam_policy.security_hub_edit_IAM_policy.arn
}





















Friday, December 4, 2020

AWS EventBridge to SQS

How to create a new EventBridge (CloudWatch Events) rule to send messages to an existing SQS queue

Event to SQS

If you want to send alerts directly from EventBridge to SQS, you must modify your SQS queue's access policy to accept messages from the rule.

{
  "Sid": "EventsToMyQueue",
  "Effect": "Allow",
  "Principal": {
     "Service": "events.amazonaws.com"
  },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "ArnEquals": {
      "aws:SourceArn": "arn:aws:events:region:account-id:rule/rule-name"
    }
  }
}

Reference: SQS Permissions.

This is fine if you know the ARN of the rule in advance. However, if you need this SQS queue to receive messages from rules that aren't known in advance, and you still want to limit who or what can send to this queue, then you can't seem to go directly to SQS.

Because we're in an AWS Org, I tried this statement instead, but unfortunately the events service principal doesn't carry the necessary PrincipalOrgID information. Or I did it wrong...

{
  "Sid": "doesnt_work",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": "o-9999999999"
    }
  }
}

Event to Lambda to SQS

Instead of sending directly to SQS, we can put a Lambda function behind the EventBridge rule, attach the necessary role and permissions to the Lambda function, and then use the PrincipalOrgID condition on the SQS side.

When you create your EventBridge rule and Lambda target, the following permission needs to be attached to the Lambda function.

{
  "Effect": "Allow",
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:region:account-id:function:function-name",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:events:region:account-id:rule/rule-name"
    }
  },
  "Sid": "InvokeLambdaFunction"
}

Reference: Lambda Permissions.

And the IAM Role attached to the Lambda function needs to have (at least) this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SidOrgLook",
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:region:account-id:queue-name"
        }
    ]
}

The Lambda would have a function like this to send to the queue:

import os
import boto3

def send_message(message, region):
    """Send a message to the SQS queue named in the queue_name environment variable."""
    queue_name = os.environ.get("queue_name", "generic_queue")
    sqs = boto3.resource('sqs', region_name=region)
    try:
        queue = sqs.get_queue_by_name(QueueName=queue_name)
    except Exception as e:
        print(f"Error fetching queue by name: {e}")
        return False
    try:
        response = queue.send_message(MessageBody=message)
    except Exception as e:
        print(f"Error sending message: {e}")
        return False
    return response.get('MessageId')
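
And a minimal handler to go with it (a sketch, assuming we just forward the whole event as the message body and that the queue lives in the same region as the function):

import json
import os

def lambda_handler(event, context):
    # AWS_REGION is set by the Lambda runtime
    region = os.environ.get("AWS_REGION", "us-east-1")
    message_id = send_message(json.dumps(event), region)
    if not message_id:
        raise RuntimeError("Failed to forward event to SQS")
    return {"MessageId": message_id}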

Then the SQS itself has this Access Policy:

{
  "Sid": "sqsQueueOrgSendPolicy",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "SQS:SendMessage"
  ],
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": "o-9999999999"
    }
  }
}

Don't copy and paste the policy into the SQS Access Policy web console; I kept running into an invalid JSON error. Do it using an API call instead.
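
For example, a minimal boto3 sketch of setting that policy through the API (the queue URL and org ID are placeholders):

import json
import boto3

sqs = boto3.client("sqs")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "sqsQueueOrgSendPolicy",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "SQS:SendMessage",
            "Resource": "arn:aws:sqs:region:account-id:queue-name",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-9999999999"}},
        }
    ],
}

# Apply the policy to the queue (placeholder URL)
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/account-id/queue-name",
    Attributes={"Policy": json.dumps(policy)},
)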

Tuesday, September 22, 2020

Using Pipeline to Publish your powershell nuget module

Using Pipeline to Publish your powershell nuget module

Create a new Repo

Bring over all the files from the previous instruction on publishing your PowerShell NuGet module.

Update the manifest file (this allows the pipeline to update the version for us):
ModuleVersion = '#{MODULEVERSION}#'

Go to Pipelines, Click New Pipeline

Select Azure Repos Git

Select Starter Pipeline

Clear content

Paste this

name: '$(BuildDefinitionName)_$(Build.BuildId)'

trigger:
  branches:
    include:
    - master

variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)]
  MODULEVERSION: '$(major).$(minor).$(revision)'

In the assistant pane, search for Replace Tokens


Update targetFiles to '**/*.psd1' and click Add. The resulting new task:
- task: replacetokens@3
  displayName: 'Replace Version Token'
  inputs:
    targetFiles: '**/*.psd1'
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '#{'
    tokenSuffix: '}#'
    useLegacyPattern: false
    enableTelemetry: false

In the assistant pane, search for NuGet (just the regular NuGet task) and enter as follows.

  • Command: pack
  • Path: **/*.nuspec
  • Automatic package versioning: Use an environment variable
  • Environment variable: MODULEVERSION
  • Additional build properties: MODULEVERSION=$(MODULEVERSION)

The resulting task:

- task: NuGetCommand@2
  displayName: 'NuGet Pack'
  inputs:
    command: 'pack'
    packagesToPack: '**/*.nuspec'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'MODULEVERSION'
    buildProperties: 'MODULEVERSION=$(MODULEVERSION)'

Back in the assistant pane, search again for NuGet

  • Command: push
  • Target Feed: Point to your own feed
The resulting task (your feed ID will be unique to you):

- task: NuGetCommand@2
  displayName: 'NuGet Push'
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: '00000000-0000-0000-0000-000000000000'

Back in the assistant pane, search for Publish Build Artifacts


  • Artifact Name: NuGetPackage
  • Artifact Publish Location: Azure Pipelines

The resulting task:

- task: PublishBuildArtifacts@1
  displayName: 'Publish Build Artifacts'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'NuGetPackage'
    publishLocation: 'Container'

That's it!

Save and run it. 















