Friday, December 4, 2020

AWS EventBridge to SQS

How to create a new EventBridge (CloudWatch Events) rule to send messages to an existing SQS queue

Event to SQS

If you want to send alerts directly from EventBridge to SQS, you must modify your SQS queue's access policy to accept messages from the rule.

{
  "Sid": "EventsToMyQueue",
  "Effect": "Allow",
  "Principal": {
     "Service": "events.amazonaws.com"
  },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "ArnEquals": {
      "aws:SourceArn": "arn:aws:events:region:account-id:rule/rule-name"
    }
  }
}

Reference: SQS Permissions.

This is fine if you know the ARN of the rule in advance. However, if you need this SQS queue to receive messages from rules that aren't known in advance, while still limiting who or what can send to the queue, then sending directly to SQS doesn't seem to work.

Because we're in an AWS Organization, I tried this policy instead, but unfortunately the events.amazonaws.com service principal doesn't carry the necessary aws:PrincipalOrgID information. Or I did it wrong...

{
      "Sid": "doesnt_work",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:region:account-id:queue-name",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-9999999999"
        }
      }
    }

Event to Lambda to SQS

Instead of sending directly to SQS, we can create a Lambda function as the target of the EventBridge rule, attach the necessary role and permissions to the Lambda function, and then use the PrincipalOrgID condition on the SQS queue.

When you create your EventBridge rule with a Lambda target, the following resource-based permission needs to be attached to the Lambda function.

{
  "Effect": "Allow",
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:region:account-id:function:function-name",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:events:region:account-id:rule/rule-name"
    }
  },
  "Sid": "InvokeLambdaFunction"
}

Reference: Lambda Permissions.
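
If you're setting this up through the API rather than the console, a boto3 sketch along these lines (the function name, statement ID, and rule ARN are placeholders) should attach that permission:

import boto3

lambda_client = boto3.client("lambda")

# Allow the EventBridge rule to invoke the target Lambda function
# (same effect as the resource-based policy statement above).
lambda_client.add_permission(
    FunctionName="function-name",
    StatementId="InvokeLambdaFunction",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:region:account-id:rule/rule-name",
)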

And the IAM Role attached to the Lambda function needs to have (at least) this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SidOrgLook",
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:region:account-id:queue-name"
        }
    ]
}

The Lambda would have a function like this to send the message to the queue:

import os

import boto3
from botocore.exceptions import ClientError

def send_message(message, region):
    # Queue name comes from the Lambda environment variable "queue_name"
    queue_name = os.environ.get("queue_name", "generic_queue")
    sqs = boto3.resource('sqs', region_name=region)
    try:
        queue = sqs.get_queue_by_name(QueueName=queue_name)
    except ClientError as err:
        print(f"Error fetching queue by name: {err}")
        return False
    try:
        response = queue.send_message(MessageBody=message)
    except ClientError as err:
        print(f"Error sending message: {err}")
        return False
    return response.get('MessageId')
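
A minimal handler wrapping it might look something like this sketch; the lambda_handler name and the use of the standard AWS_REGION environment variable are my assumptions, not requirements:

import json
import os

def lambda_handler(event, context):
    # Forward the raw EventBridge event to the queue as a JSON string.
    region = os.environ.get("AWS_REGION", "us-east-1")
    message_id = send_message(json.dumps(event), region)
    return {"message_id": message_id}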

Then the SQS queue itself has this access policy:

{
  "Sid": "sqsQueueOrgSendPolicy",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "SQS:SendMessage"
  ],
  "Resource": "arn:aws:sqs:region:account-id:queue-name",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": "o-9999999999"
    }
  }
}

Don't copy and paste the policy into the SQS Access Policy web console; I kept running into an invalid JSON error. Set it through an API call instead.
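
For example, here's a rough boto3 sketch of that API call; the queue URL is a placeholder and the policy dict just wraps the statement above in a full policy document:

import json

import boto3

sqs_client = boto3.client("sqs")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "sqsQueueOrgSendPolicy",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "SQS:SendMessage",
            "Resource": "arn:aws:sqs:region:account-id:queue-name",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-9999999999"}},
        }
    ],
}

# Apply the access policy to the existing queue via the API instead of the console.
sqs_client.set_queue_attributes(
    QueueUrl="https://sqs.region.amazonaws.com/account-id/queue-name",
    Attributes={"Policy": json.dumps(policy)},
)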

Tuesday, September 22, 2020

Using Pipeline to Publish your powershell nuget module


Create a new Repo

Bring over all the files from the previous post on publishing your PowerShell NuGet module.

Update Manifest file (this is to allow the pipeline to update the version for us):
ModuleVersion = '#{MODULEVERSION}#'

Go to Pipelines, Click New Pipeline

Select Azure Repos Git

Select Starter Pipeline

Clear content

Paste this

name: '$(BuildDefinitionName)_$(Build.BuildId)'

trigger:
  branches:
    include:
    - master

variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)]
  MODULEVERSION: '$(major).$(minor).$(revision)'

In the assistant pane, search for Replace Tokens


Update the targetFiles to '**/*.psd1' and click Add. The resulting new task:
- task: replacetokens@3
  displayName: 'Replace Version Token'
  inputs:
    targetFiles: '**/*.psd1'
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '#{'
    tokenSuffix: '}#'
    useLegacyPattern: false
    enableTelemetry: false

In the assistant pane, search for Nuget (just regular Nuget) and enter as follows. 

  • Command: pack
  • Path: **/*.nuspec
  • Automatic package versioning: Use an environment variable
  • Environment variable: MODULEVERSION
  • Additional build properties: MODULEVERSION=$(MODULEVERSION)

The resulting task:

- task: NuGetCommand@2
  displayName: 'NuGet Pack'
  inputs:
    command: 'pack'
    packagesToPack: '**/*.nuspec'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'MODULEVERSION'
    buildProperties: 'MODULEVERSION=$(MODULEVERSION)'

Back to assistant pane, search again for Nuget

  • Command: push
  • Target Feed: Point to your own feed
The resulting task (your feed ID will be unique to you):

- task: NuGetCommand@2
  displayName: 'NuGet Push'
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: '00000000-0000-0000-0000-000000000000'

Back to assistant pane, search for Publish Build Artifacts


  • Artifact Name: NugetPackage
  • Artifact Publish Location: Azure Pipeline

The resulting task:

- task: PublishBuildArtifacts@1
  displayName: 'Publish Build Artifacts'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'NuGetPackage'
    publishLocation: 'Container'

That's it!

Save and run it. 
















Tuesday, September 8, 2020

Terraform Module For_Each Subnets

Working with module for_each in Terraform

With 0.13, Terraform officially supports for_each on modules.

Given the subnet module below:

Subnet Module

variable "subnets"{
  default = {
    "public" = {
      "0" = {
        "az" = 0
        "cidr" = "10.1.0.0/18"
      }
      "1" = {
        "az" = 1
        "cidr" = "10.1.64.0/18"
      }
    }
    "private" = {
      "0" = {
        "az" = 0
        "cidr" = "10.1.128.0/19"
      }
      "1" = {
        "az" = 1
        "cidr" = "10.1.160.0/19"
      }
      "2" = {
        "az" = 0
        "cidr" = "10.1.192.0/19"
      }
      "3" = {
        "az" = 1
        "cidr" = "10.1.224.0/19"
      }
    }
  }
}

locals {
  public_subnets  = lookup(var.subnet_map, "public", {})
  private_subnets = lookup(var.subnet_map, "private", {})
}

resource "aws_subnet" "public_subnets" {
  for_each                = local.public_subnets
  vpc_id                  = var.vpc_id
  cidr_block              = each.value["cidr"]
  availability_zone       = data.aws_availability_zones.az.names[each.value["az"]]
  map_public_ip_on_launch = "false"
}

output "public_subnets" {
  value = aws_subnet.public_subnets
}

We can call it multiple times via a module call with for_each:

variable "subnets_map"{
  default = {
    "10.20.0.0/24" = {
      "public" = {
        "0" = {
          "az" = 0
          "cidr" = "10.20.0.0/26"
        }
        "1" = {
          "az" = 1
          "cidr" = "10.20.64.0/26"
        }
      }
      "private" = {
        "0" = {
          "az" = 0
          "cidr" = "10.20.128.0/26"
        }
        "1" = {
          "az" = 1
          "cidr" = "10.20.160.0/26"
        }
      }
    }
  "10.30.0.0/24" = {
      "public" = {
        "0" = {
          "az" = 0
          "cidr" = "10.30.0.0/26"
        }
        "1" = {
          "az" = 1
          "cidr" = "10.30.64.0/26"
        }
      }
      "private" = {
        "0" = {
          "az" = 0
          "cidr" = "10.30.128.0/26"
        }
        "1" = {
          "az" = 1
          "cidr" = "10.30.160.0/26"
        }
      }
    }
  }
}

module "many_subnets" {
  source               = "../subnets"
  for_each             = var.subnets_map
  vpc_id               = var.vpc_id
  subnet_map           = each.value
}

outputs "public_subnets"{
  value = module.many_subnets.public_subnets
}

 The content of module.many_subnets looks like this:

module.many_subnets["10.30.0.0/24"] = 
  private_subnets = {
    0 = {
      arn = "arn:aws:ec2:us-east-1:xxxx:subnet/subnet-xxx"
      assign_ipv6_address_on_creation = false
      availability_zone = "us-east-1a"
      availability_zone_id = "use1-az2"
      cidr_block = "10.30.0.0/26"
      id = "subnet-xxxx"
      ipv6_cidr_block = ""
      ipv6_cidr_block_association_id  = ""
      map_public_ip_on_launch = false
      outpost_arn = ""
      owner_id = "xxxx"
      tags = {}
      timeouts = null
      vpc_id = "vpc-xxx"
      }
    1 = {...}
  }

After everything was done, I realized I only wanted the subnet CIDR and ID. I could have gone back to the original module and updated the output, but I just used this local variable instead:

  private_subnets = {
    for key, value in flatten([
      for item in module.many_subnets : [
        for child in item.private_subnets : {
          "id"   = child.id
          "cidr" = child.cidr_block
        }
      ]
    ]) : (value["cidr"]) => (value["id"])
  }
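
In Python terms, that nested for expression does roughly this (the subnet IDs and structure below are illustrative placeholders):

# Shape of module.many_subnets, reduced to what the expression uses (placeholder IDs):
many_subnets = {
    "10.30.0.0/24": {
        "private_subnets": {
            "0": {"id": "subnet-aaa", "cidr_block": "10.30.128.0/26"},
            "1": {"id": "subnet-bbb", "cidr_block": "10.30.160.0/26"},
        }
    }
}

# Flatten every module instance's private subnets into one cidr -> id map.
private_subnets = {
    child["cidr_block"]: child["id"]
    for item in many_subnets.values()
    for child in item["private_subnets"].values()
}
print(private_subnets)  # {'10.30.128.0/26': 'subnet-aaa', '10.30.160.0/26': 'subnet-bbb'}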

Thursday, July 30, 2020

Publishing Powershell Nuget module to Azure DevOps

Publishing your powershell nuget module to your Azure DevOps Artifacts


Log into your Azure DevOps Project
Click Artifacts
Create Feed
Give it a name, visibility, and Scope
After your feed is created, click Connect to feed and click NuGet to get the package source (you can get this from any type). It'll look like this:
https://pkgs.dev.azure.com/ORG/PROJECT/_packaging/FEED/nuget/v3/index.json
Click on your settings icon and select Personal Access Tokens
Create a new token and give it at least 
  • Work Items (Read)
  • Packaging (Read & Write)
Save this token for later use.

Create a new working directory for your module
mkdir c:\mud
Go into this directory.
Create a new manifest file here.
New-ModuleManifest -Path .\mud.psd1
Create a new module root file.
New-item .\mud.psm1
Update the new manifest file with at least the following
RootModule = 'mud.psm1'
ModuleVersion = '1.1.0'
FunctionsToExport = @('function1','function2')
FileList = @('file1.ps1','file2.ps1')
  • FunctionsToExport is the list of functions that you want exposed from this module.
  • FileList is the list of files that will be consumed by this module. These files must reside at the root of this folder.
Create a NuGet spec file using the name of your module (be sure you have downloaded the NuGet CLI)
nuget spec mud
Go into this new file (mud.nuspec)
  • Be sure the version matches the one in mud.psd1
  • Be sure to update ProjectURL and IconURL, or just remove them altogether
  • Remove the default dependency
Package your module
nuget pack mud.nuspec
Add the source
nuget sources Add -Name "myfeed" -Source $source_push -username $username -password $pat
  • source_push is your package source address from above
  • username is probably your email address
  • pat is your Personal Access Token you got earlier
Push this package
nuget push -Source "myfeed" -ApiKey AzureDevOpsServices .\mud.nupkg

That's it. Now you can browse, find, and install the above module. 
> Register-PSRepository -Name $repo_name -SourceLocation $source_repo -InstallationPolicy Trusted
> Get-PSRepository
> find-module -Repository $repo_name -credential $credsAzureDevopsServices
> Install-Module -Name $module_name -Repository $repo_name -credential $credsAzureDevopsServices










Tuesday, July 28, 2020

GitHub and Terraform Cloud

Getting started with Terraform Cloud and GitHub

Creating a new workspace (linked to GitHub)


Click Workspaces

Click New workspace
Choose GitHub.com
Click on the provided link
Choose a repository. You can choose any repo. 
Click Create workspace

Setting Auto Apply

Go to your workspace
Click on Settings >> General and set Apply Method to Auto apply, then save.
Now when you update a file in the GitHub repo that you associated with this workspace, it'll cause an apply to occur.

Creating a private module


From GitHub, create a new repository. Terraform Cloud only supports 1 module per repository. It MUST be named in the following format: "terraform-<provider>-<unique name>". In my example, this will be called "terraform-aws-testmodule" because I am creating an AWS module.

After you have added all your files, you must create a release and tag it in this format: "v#.#.#".










Next, go to your Terraform Cloud account

Click Module

Click Add module
Choose GitHub.com 












Click on the link provided and enter the information shown in your GitHub account
Select your repository from the list

If you didn't name the repo correctly, you won't see it. If you didn't tag it properly, you'll also get an error. 
If it worked, you'll see a new module showing the version number you tagged, along with provisioning instructions.

Go back to GitHub and make a second release with a new tag. Then go back to Terraform Cloud and refresh the module page from above.


Using a private module

Go to your code that is associated with your Terraform Cloud workspace. Update the code to include the provision block provided from above:
module "testmodule" {
  source  = "app.terraform.io/BLAH/testmodue/aws"
  version = "2.0.0"
}

In my workspace, this auto-runs on commit.


Using Terraform Cloud API (Python)

Create a separate virtual environment for your Terraform Cloud code (recommended).
Create venv
> py -m venv tfc_env
Activate it
> .\tfc_env\Scripts\activate
Deactivate it
> deactivate

Install this 
> pip install tfc_client --trusted-host pypi.org --trusted-host files.pythonhosted.org

There are other client options as well.

Go to Terraform Cloud and retrieve a Token under User Settings

Here's some sample code to interact with Terraform Cloud (more examples are in the tfc_client docs linked in the comments below):
## You need to activate this virtual env before you run this:
##> .\python\Scripts\activate
## When you are done, you should deactivate it:
##> deactivate
import os
## Doc for this is here https://github.com/adeo/iwc-tfc-client
## pip install tfc_client --trusted-host pypi.org --trusted-host files.pythonhosted.org
from tfc_client import TFCClient
from tfc_client.enums import (
    RunStatus,
    NotificationTrigger,
    NotificationsDestinationType,
)
from tfc_client.models import VCSRepoModel

##Needed this Self-Signed Cert when working on VPN
os.environ["REQUESTS_CA_BUNDLE"]="./mycert.pem"
#$env:REQUESTS_CA_BUNDLE="./mycert.pem"
# Instantiate the client
## Get the token from web console and paste it into the file
token = open("token.txt", "r").read()
client = TFCClient(token=token)

# Retrieve any object type by ID from the client
my_org = client.get("organization", id="xxxxxxxx")
my_ws_byID = client.get("workspace", id="ws-111111111")
my_ws_byName = my_org.workspace(name="7777777777")
my_run = client.get("run", id="run-777777777777777")
my_var = client.get("var", id="test")

# To retrieve all workspaces:
for ws in my_org.workspaces:
    print(ws.name)

print(my_run)
print(my_var)
#my_run = my_ws_byName.create("run", message="Run run run")

If you need full access to the Terraform Cloud API, you should consider using the go-tfe library instead.







































Thursday, July 16, 2020

Terraform Subnet Splitting

How to dynamically split subnets in Terraform

Using cidrsubnets: 

The one without "s" returns a single subnet whereas the other returns a set of subnets. The "newbits" are used to determine how many additional bits of netmask to use in creating subsequent subnets. 
For example: 10.1.0.0/16. First two octets are not important. To simplify explanation, I'm going to use binary notation of last 2 octets. Here we're asking for 2 subnets with each having one more netmask. 
 
 
output "example"{
    value = cidrsubnets("10.1.0.0/16",1,1)
}

Result:
example = [
  "10.1.0.0/17",
  "10.1.128.0/17",
]

This works because one additional netmask bit gives exactly 2 subnets of that size:
  0.0 to 127.255    <- possible range
  00000000.00000000 <- network
  10000000.00000000 <- mask
  
  128.0 to 128.255
  10000000.00000000
  10000000.00000000

If you try to get another subnet, you'll get an error:
Error: Invalid function argument

  on main.tf line 50, in output "example":
  50:     value = cidrsubnets("10.1.0.0/16",1,1,1)

Invalid value for "newbits" parameter: not enough remaining address space for
a subnet with a prefix of 17 bits after 10.1.128.0/17.

Following the above logic, you can see that you can use 2 "newbits" to create 4 subnets.
output "example"{
    value = cidrsubnets("10.1.0.0/16",2,2,2,2)
}

example = [
  "10.1.0.0/18",
  "10.1.64.0/18",
  "10.1.128.0/18",
  "10.1.192.0/18",
]

==========================
  0.0 to 63.255  
  00000000.00000000
  11000000.00000000
  
  64.0 to 127.255
  01000000.00000000
  11000000.00000000
  
  128.0 to 191.255
  10000000.00000000
  11000000.00000000
  
  192.0 to 255.255
  11000000.00000000
  11000000.00000000

And so, by taking the base-2 logarithm (rounded up) of the desired number of subnets, you can calculate the minimum number of "newbits" you need.
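
To sanity-check that math outside Terraform, here's a small Python sketch of the same ceil-of-log-base-2 calculation:

import math

def newbits_for(subnet_count):
    # 2 subnets -> 1 newbit, 4 -> 2, 8 -> 3; non-powers of two round up (5 -> 3).
    return math.ceil(math.log2(subnet_count))

for count in (2, 3, 4, 5, 8):
    print(count, "subnets need at least", newbits_for(count), "newbits")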

Using cidrsubnet


However, a big issue here is that cidrsubnets won't let you pass in a dynamic number of arguments to create a dynamic number of subnets. That's where cidrsubnet comes in: you call it once per subnet and pass in the nth index.

This gives us the 4th subnet (the index is zero-based):
output "example"{
    value = cidrsubnet("10.1.0.0/16",2,3)
}

example = 10.1.192.0/18

But of course the tricky part of Terraform is that there isn't a convenient way to do an index-based for loop like this:
for (i=0; i<5; i++){

}


Index Looping


You can hack your way around this by creating a map or list and doing a for loop across it. 

Method 1: A pre-populated map of counter arrays. This method uses the counter_map entry whose key matches the desired subnet count.
locals {
    counter_map = {
        "1" = [0]
        "2" = [0, 1]
        "3" = [0, 1, 2]
        "4" = [0, 1, 2, 3]
        "5" = [0, 1, 2, 3, 4]
    }
    private_subnets = [for item in local.counter_map[var.private_count] : cidrsubnet(local.subnets[1], local.private_count_newbit, item)]
}

Method 2: One long counter array. This method pulls all the numbers up to the desired index value. 
locals {
    counter_set_all = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    public_subnets  = [for item in [for item in local.counter_set_all : item if item < var.public_count] : cidrsubnet(local.subnets[0], local.public_count_newbit, item)]
}

Putting all the pieces together


variable "public_count_weight"{
    default = 1
    description = "newbit weight given to subnet, lower means bigger subnet"
} 
variable "private_count_weight"{
    default = 1
    description = "newbit weight given to subnet, lower means bigger subnet"
}
    
variable "private_count"{
    default = 2
}

variable "public_count"{
    default = 2
}

locals{
    ## newbits can be calculated from the number of subnets desired by looking at the binary log to the desired subnet
    ## If 2 subnets are required, this requires 1 more bit in the netmask  (2^1 = 2 or log base 2 of 2 = 1)
    ## If 4 subnets are required, this requires 2 more bits in the netmask (2^2 = 4 or log base 2 of 4 = 2) 
    ## If 8 subnets are required, this requires 3 more bits in the netmask (2^3 = 8 or log base 2 of 8 = 3)
    public_count_newbit  = ceil(log(var.public_count  , 2 ))
    private_count_newbit = ceil(log(var.private_count , 2 ))
    ## split the initial CIDR into two, one for public and one for private
    subnets = cidrsubnets("10.1.0.0/16", var.public_count_weight, var.private_count_weight)
    ## Split each of the half from above for desired number of subnet in each type
    public_subnets = var.public_count == 1 ? [local.subnets[0]]:[for item in [for item in local.counter_set_all : item  if item < var.public_count]: cidrsubnet(local.subnets[0], local.public_count_newbit, item)]
    private_subnets = var.private_count == 1 ? [local.subnets[1]]:[for item in local.counter_map[var.private_count]: cidrsubnet(local.subnets[1], local.private_count_newbit, item)]
}

locals {
    counter_map = {
        "1" = [0]
        "2" = [0, 1]
        "3" = [0, 1, 2]
        "4" = [0, 1, 2, 3]
        "5" = [0, 1, 2, 3, 4]
    }
    counter_set_all = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
}

output "subnets" {
    value = local.subnets
}

output "private"{
    value = local.private_subnets
}

output "public"{
    value = local.public_subnets
}


Sunday, June 28, 2020

AWS ABAC

Configuring Policy for AWS Attribute Based Access Control

The policies below are based on this AWS tutorial.

Scenario 1:
  • Allow the action if the department tag on the resource does NOT exist.
  • Allow the action if the department tag (when it exists) matches between resource and principal.
  • Deny adding or removing the department tag unless it matches the principal's.
Result:
  • The "aws:RequestTag" condition lets me edit the description of an item that was missing the department tag.
  • The "aws:ResourceTag" condition (starting from an empty tag set):
    • ALLOWS creating then deleting a new tag "DEPT" = "None"
    • ALLOWS creating then deleting a new tag dEPartment = "IT" (IT is also the principal's principalTag/department)
    • DENIES creating a new tag department = "HR"


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllActionsSecretsManagerSameDepartment",
            "Effect": "Allow",
            "Action": "secretsmanager:*",
            "Resource": "*",
            "Condition": {
                "StringLikeIfExists": {
                    "aws:RequestTag/department": "${aws:PrincipalTag/department}",
                    "aws:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            }
        },
        {
            "Sid": "AllResourcesSecretsManagerNoTags",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword",
                "secretsmanager:ListSecrets"
            ],
            "Resource": "*"
        }
    ]
}

Scenario 2:
  • Same rules as scenario 1.
  • Only allow the HR department (or no department designation) to use Secrets Manager.
Result:
  • Only users whose principalTag/department is "HR" may put a department tag on a Secrets Manager resource and control it.
We had to add the separate "StringEqualsIfExists" block because JSON won't let us repeat the "aws:RequestTag/department" key at the same level inside "StringLikeIfExists".


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllActionsSecretsManagerSameDepartment",
            "Effect": "Allow",
            "Action": "secretsmanager:*",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                    "aws:RequestTag/department": "HR"
                },
                "StringLikeIfExists": {
                    "aws:RequestTag/department": "${aws:PrincipalTag/department}",
                    "aws:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            }
        },
        {
            "Sid": "AllResourcesSecretsManagerNoTags",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword",
                "secretsmanager:ListSecrets"
            ],
            "Resource": "*"
        }
    ]
}

Tuesday, June 23, 2020

AWS Policy Conditions



Multiple Conditions

Multiple condition operators (and multiple keys within one operator) are combined with logical AND.
When you compare a single key against multiple values, those values are combined with logical OR.

These two conditions below have a similar result:
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml",
          "aws:RequestTag/department": [
            "HR",
            "IT"
          ]
        }
      }


      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        },
        "StringLike": {
          "aws:RequestTag/department": [
            "HR",
            "IT"
          ]
        }
      }

The reason you can't split the first example into two "StringEquals" blocks is that JSON won't permit two identical keys at the same level. Either way, both examples read as the first condition AND the second condition, where the second condition checks whether aws:RequestTag/department is HR OR IT.
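
As a mental model of that evaluation (purely illustrative Python, not how AWS implements it):

def condition_matches(condition_block, request_context):
    # Every key in the condition block must match (logical AND)...
    for key, allowed_values in condition_block.items():
        # ...but a single key matches when the request value equals ANY of the listed values (logical OR).
        if request_context.get(key) not in allowed_values:
            return False
    return True

condition = {
    "SAML:aud": ["https://signin.aws.amazon.com/saml"],
    "aws:RequestTag/department": ["HR", "IT"],
}
request = {
    "SAML:aud": "https://signin.aws.amazon.com/saml",
    "aws:RequestTag/department": "IT",
}
print(condition_matches(condition, request))  # True: aud matches AND department matches (via OR)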


ForAllValues vs ForAnyValues qualifier 

ForAllValues works like a logical AND: every value in the request's multivalued key must match against the list of values in the policy.

"Condition": {
    "ForAllValues:StringEquals": {
        "dynamodb:Attributes": [
            "ID",
            "Message",
            "Tags"
        ]
    }
}

This reads as: every value within dynamodb:Attributes (the multivalued key supplied in the request) must match against this list (we'll call it List B).

Assume the content of dynamodb:Attributes is as follows: [ID,Message,Tags,UserName].
So the checks would be
  1. Is there ID in List B : Yes
  2. Is there Message in List B: Yes
  3. Is there Tags in List B: Yes
  4. Is there UserName in List B: No
Because of the 4th check, this returns False.

Assume the content of dynamodb:Attributes is as follows: [ID,Tags].
So the checks would be
  1. Is there ID in List B : Yes
  2. Is there Tags in List B: Yes
Because they both return Yes, this returns True.

If dynamodb:Attributes is empty, this also returns True.

ForAnyValues, on the other hand, works like a logical OR. Sometimes this ends with the same result.

"Condition": {
    "ForAnyValues:StringEquals": {
        "dynamodb:Attributes": [
            "ID",
            "Message",
            "Tags"
        ]
    }
}

This reads as: at least one value within dynamodb:Attributes (the multivalued key supplied in the request) must match against this list (we'll call it List B).

Assume the content of dynamodb:Attributes is as follows: [ID,Message,Tags,UserName].
So the checks would be
  1. Is there ID in List B : Yes
Because it matches at least one, this returns True.

Assume the content of dynamodb:Attributes is as follows: [ID,Tags].
So the checks would be
  1. Is there ID in List B : Yes
Again, because at least one check is Yes, this returns True.

Assume the content of dynamodb:Attributes is as follows: [UserName,DateStamp].

  1. Is there UserName in List B: No
  2. Is there DateStamp in List B: No
Because it could not find any matching items, this returns False.

In this case, though, if dynamodb:Attributes is empty, this returns False.
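
Here's the same ForAllValues/ForAnyValues set logic as a small, purely illustrative Python sketch:

def for_all_values(requested, allowed):
    # Every requested value must appear in the allowed list; an empty request also passes.
    return all(value in allowed for value in requested)

def for_any_values(requested, allowed):
    # At least one requested value must appear in the allowed list; an empty request fails.
    return any(value in allowed for value in requested)

list_b = ["ID", "Message", "Tags"]

print(for_all_values(["ID", "Message", "Tags", "UserName"], list_b))  # False
print(for_all_values(["ID", "Tags"], list_b))                         # True
print(for_all_values([], list_b))                                     # True
print(for_any_values(["UserName", "DateStamp"], list_b))              # False
print(for_any_values(["ID", "Tags"], list_b))                         # True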





















Friday, June 19, 2020

AWS DynamoDB + Python

AWS DynamoDB with Python

Create a DynamoDB table as follows:

Table name: TestTable
Primary partition key: TestID (String)

There are two functions.

The updateTable function updates the item if the key is found, or adds a new item if the key is NOT found. It will add the Nickname "column" (attribute) if it's not already there.

The getContent function retrieves everything from the table where the Nickname is not 'Frank'. The return line also falls back to an empty list if nothing comes back. FilterExpression supports various comparison operators, including 'eq' for 'equals', 'ne' for 'not equal', 'lt' for 'less than', and 'lte' for 'less than or equal to'.

import boto3
from boto3.dynamodb.conditions import Key, Attr
profile = "default"
session = boto3.Session(profile_name=profile)
thisRegion = "us-east-1"
thisResource = session.resource('dynamodb', region_name=thisRegion)

def updateTable(thisId,name):
    table = thisResource.Table('TestTable')
    response = table.update_item(
        Key={
            'TestID': thisId
        },
        UpdateExpression='set Nickname = :nickname',
        ExpressionAttributeValues={
            ':nickname': name
        },
        ReturnValues="UPDATED_NEW"
    )

def getContent(filterString):
    table = thisResource.Table('TestTable')
    response = table.scan(
        FilterExpression=Attr('Nickname').ne(filterString)
    )
    return(response.get('Items',[]))

updateTable('111111','Joe')
updateTable('111121','Frank')
updateTable('111131','Hank')
print(getContent(filterString='Frank'))

Result:

[{'TestID': '111111', 'Nickname': 'Joe'},
 {'TestID': '111131', 'Nickname': 'Hank'}]

AWS WAF log4j query

How to query AWS WAF log for log4j attacks 1. Setup your Athena table using this instruction https://docs.aws.amazon.com/athena/latest/ug/wa...