Wednesday, March 25, 2020

AWS Custom Config Cross-Account

How to AWS Custom Config Cross-Account

Lambda - Python

Account 999999999
  • This is where you'll host the Lambda code
Account 888888888
  • This is where you'll host the Config Rule

Account 999999999

Follow the previous post's instructions for the Python code, then update the lambda_handler with the following (note that the file needs the json and boto3 imports at the top):

import json
import boto3

def lambda_handler(event, context):
    evaluations = []
    test_mode = False
    thisRegion = 'us-east-1'
    print('Event: ', event)
    invoking_event = json.loads(event["invokingEvent"])
    print('Invoking Event: ',invoking_event)
    notification_time = invoking_event["notificationCreationTime"]
    print('Invoked Time: ', notification_time)
    result_token = event["resultToken"]
    print('Result Token: ', result_token)
    # In our test event, we have defined the result_token to be XYZ
    if result_token == 'XYZ':
        test_mode = True
        
    # this is passed to us from Config Rule, if this exists, then Assume Role
    ruleParameters = json.loads(event["ruleParameters"])
    if 'executionRole' in ruleParameters.keys():
        print('Assume Role')
        executionRole = ruleParameters["executionRole"]
        print(executionRole)
        sts_client = boto3.client('sts')
        assume_role_response = sts_client.assume_role(RoleArn=executionRole, RoleSessionName="configLambdaExecution")
        credentials = assume_role_response['Credentials']
        config = boto3.client("config", region_name=thisRegion,
                        aws_access_key_id=credentials['AccessKeyId'],
                        aws_secret_access_key=credentials['SecretAccessKey'],
                        aws_session_token=credentials['SessionToken']
                       )
        
    else:
        credentials = []
        config = boto3.client("config")
        print('Self Run')
    # pass in credential and time stamp and get back eval result
    evaluations = evaluate_compliance(notification_time, credentials)
    print(evaluations)
    print(result_token)
    result = config.put_evaluations(
            Evaluations = evaluations,
            ResultToken = result_token,
            TestMode = test_mode
            )
    
    metaData = result["ResponseMetadata"]
    statusCode = metaData["HTTPStatusCode"]
    return {
        'statusCode': statusCode,
        'body': json.dumps(result)
    }
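
The handler above delegates to an evaluate_compliance helper. Here is a minimal sketch of the shape that helper could take; in the handler the second argument is the assumed-role credentials, but in this sketch the AWS lookups are factored out and the findings are passed in as (resource_type, resource_id, is_compliant) tuples so the evaluation-building part is visible on its own (illustrative names, not from the original post):

```python
def evaluate_compliance(notification_time, findings):
    # Build the list of evaluation dicts that put_evaluations expects.
    # findings: iterable of (resource_type, resource_id, is_compliant) tuples
    evaluations = []
    for resource_type, resource_id, is_compliant in findings:
        evaluations.append({
            "ComplianceResourceType": resource_type,
            "ComplianceResourceId": resource_id,
            "ComplianceType": "COMPLIANT" if is_compliant else "NON_COMPLIANT",
            "OrderingTimestamp": notification_time,
        })
    return evaluations
```

In the real helper you would gather the findings with clients built from the assumed-role credentials, then return this list for put_evaluations.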

Be sure your execution role for this function has at least the following. The first policy allows the function to send logs to CloudWatch. The second policy allows this function to assume config_role from the other account.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:us-east-1:999999999:log-group:/aws/lambda/config_test:*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:999999999:*"
        }
    ]
}

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::888888888:role/config_role"
        }
    ]
}

Add a resource-based policy to your Lambda function so that it can be invoked by the other account. This can only be done via the CLI or an API call.
Run this CLI command using credentials of account 999999999:

aws lambda add-permission \
  --function-name config_test \
  --region us-east-1 \
  --statement-id 1001 \
  --action "lambda:InvokeFunction" \
  --principal config.amazonaws.com \
  --source-account 888888888
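
If the command succeeds, the function's resource-based policy ends up with a statement along these lines (a sketch of the expected shape; the exact Resource ARN depends on your account and region):

```json
{
  "Sid": "1001",
  "Effect": "Allow",
  "Principal": {
    "Service": "config.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:999999999:function:config_test",
  "Condition": {
    "StringEquals": {
      "AWS:SourceAccount": "888888888"
    }
  }
}
```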


Account 888888888

Create a new role in this account with at least the following permissions. Be sure to add the specific permissions needed to evaluate your resources.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "config:PutEvaluations"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

The new role also needs a trust relationship that ALLOWS it to be assumed by the Lambda function's execution role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::999999999:role/service-role/lambdaConfigRole"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

Now create the new Config rule.
Provide the ARN of the Lambda function that you created in account 999999999:


Set your trigger type and resource type(s).

Fill out the Rule parameter with the role that you want the Lambda function to use:

Done. Now try running the Config rule manually by clicking the blue Re-evaluate button in account 888888888 and watch the CloudWatch logs in account 999999999.

Monday, March 23, 2020

AWS Config Custom Rule

How to AWS Custom Config 

Lambda - Python

Want to know how to set up your Lambda Python function for AWS Custom Config? Here's what you need to know to get started. 

Here's a really simplified version of the Python code that you need to get started. You should add some try/except error handling and parse whatever input you are looking for.
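
For instance, a defensive wrapper around the payload parsing might look like this (a sketch; the helper name is mine, not from the original code):

```python
import json

def safe_invoking_event(event):
    # Decode invokingEvent, returning None instead of raising on a bad payload
    try:
        return json.loads(event["invokingEvent"])
    except (KeyError, TypeError, json.JSONDecodeError):
        return None
```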

import json
import boto3

def lambda_handler(event, context):
    # you can check what invoked it
    invoking_event = json.loads(event["invokingEvent"])
    # you only need this for resource change trigger
    configuration_item = invoking_event["configurationItem"]
    # you can check what rule parameters were passed in
    rule_parameters = json.loads(event["ruleParameters"])
    # this token must be passed back to Config with your evaluations
    result_token = event["resultToken"]

    compliant, annotation = evaluate_compliance(some_argument)

    config = boto3.client("config")
    # this example assumes resource trigger; for a time trigger, you can have the above function spit out a list of resources
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": configuration_item["resourceType"],
                "ComplianceResourceId": configuration_item["resourceId"],
                "ComplianceType": compliant,
                "Annotation": annotation,
                "OrderingTimestamp": configuration_item["configurationItemCaptureTime"]
            },
        ],
        ResultToken=result_token,
    )


def evaluate_compliance(argument):
    compliant = "COMPLIANT"
    annotations = []
    if blah:
        compliant = "COMPLIANT"
        annotations.append("blah")
    else:
        compliant = "NON_COMPLIANT"
        annotations.append("different blah")
    return compliant, " ".join(annotations)

To understand how to fill in the logic of your code, you can look at what you get in the event payload.

Inputs


Sample scheduled trigger:
{
   "version":"1.0",
   "invokingEvent":"{\"awsAccountId\":\"999999999\",\"notificationCreationTime\":\"2020-03-23T02:42:51.511Z\",\"messageType\":\"ScheduledNotification\",\"recordVersion\":\"1.0\"}",
   "ruleParameters":"{\"executionRole\":\"arn:aws:iam::9999:role/blah\"}",
   "resultToken":"XYZ",
   "eventLeftScope":false,
   "executionRoleArn":"arn:aws:iam::9999999:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig",
   "configRuleArn":"arn:aws:config:us-east-2:999999999:config-rule/config-rule-kfabou",
   "configRuleName":"my-test-rule1",
   "configRuleId":"config-rule-kfabou",
   "accountId":"9999999999"
}
The decoded invokingEvent from the payload above:
{
   "awsAccountId":"999999999",
   "notificationCreationTime":"2020-03-23T02:42:51.511Z",
   "messageType":"ScheduledNotification",
   "recordVersion":"1.0"
}
Rule parameters are the key/value pairs you provide under your Config rule setup.
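
Since ruleParameters arrives as a JSON string inside the event, it has to be decoded before use; a minimal sketch using the sample value from the scheduled-trigger event above:

```python
import json

# ruleParameters is a JSON-encoded string inside the event payload
event = {"ruleParameters": "{\"executionRole\":\"arn:aws:iam::9999:role/blah\"}"}
rule_parameters = json.loads(event["ruleParameters"])
print(rule_parameters["executionRole"])
```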



Sample resource change trigger:
{
   "version":"1.0",
   "invokingEvent":"{\"configurationItemDiff\":null,\"configurationItem\":{\"relatedEvents\":[],\"relationships\":[{\"resourceId\":\"ANPASP666ZVAXGDCE5SDN\",\"resourceName\":\"Read_Only_EC2\",\"resourceType\":\"AWS::IAM::Policy\",\"name\":\"Is attached to CustomerManagedPolicy\"}],\"configuration\":{\"path\":\"/\",\"userName\":\"FoxySugar\",\"userId\":\"AIDASP666ZVA36XQGO6DD\",\"arn\":\"arn:aws:iam::9999999999:user/FoxySugar\",\"createDate\":\"2020-01-07T14:06:13.000Z\",\"userPolicyList\":[],\"groupList\":[],\"attachedManagedPolicies\":[{\"policyName\":\"Read_Only_EC2\",\"policyArn\":\"arn:aws:iam::9999999:policy/Read_Only_EC2\"}],\"permissionsBoundary\":null,\"tags\":[]},\"supplementaryConfiguration\":{},\"tags\":{},\"configurationItemVersion\":\"1.3\",\"configurationItemCaptureTime\":\"2020-03-23T02:02:28.440Z\",\"configurationStateId\":1584928948999,\"awsAccountId\":\"999999999\",\"configurationItemStatus\":\"ResourceDiscovered\",\"resourceType\":\"AWS::IAM::User\",\"resourceId\":\"AIDASP666ZVA36XQGO6DD\",\"resourceName\":\"FoxySugar\",\"ARN\":\"arn:aws:iam::9999999:user/FoxySugar\",\"awsRegion\":\"global\",\"availabilityZone\":\"Not Applicable\",\"configurationStateMd5Hash\":\"\",\"resourceCreationTime\":\"2020-01-07T14:06:13.000Z\"},\"notificationCreationTime\":\"2020-03-23T02:40:01.999Z\",\"messageType\":\"ConfigurationItemChangeNotification\",\"recordVersion\":\"1.3\"}",
   "ruleParameters":"{}",
   "resultToken":"XYZ==",
   "eventLeftScope":false,
   "executionRoleArn":"arn:aws:iam::99999999:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig",
   "configRuleArn":"arn:aws:config:us-east-2:999999999:config-rule/config-rule-kfabou",
   "configRuleName":"my-config-rule1",
   "configRuleId":"config-rule-kfabou",
   "accountId":"99999999999999"
}
The decoded invokingEvent from the payload above:

{
   "configurationItemDiff":null,
   "configurationItem":{
      "relatedEvents":[],
      "relationships":[
         {
            "resourceId":"ANPASP666ZVAXGDCE5SDN",
            "resourceName":"Read_Only_EC2",
            "resourceType":"AWS::IAM::Policy",
            "name":"Is attached to CustomerManagedPolicy"
         }
      ],
      "configuration":{
         "path":"/",
         "userName":"FoxySugar",
         "userId":"AIDASP666ZVA36XQGO6DD",
         "arn":"arn:aws:iam::9999999999:user/FoxySugar",
         "createDate":"2020-01-07T14:06:13.000Z",
         "userPolicyList":[],
         "groupList":[],
         "attachedManagedPolicies":[
            {
               "policyName":"Read_Only_EC2",
               "policyArn":"arn:aws:iam::9999999:policy/Read_Only_EC2"
            }
         ],
         "permissionsBoundary":null,
         "tags":[]
      },
      "supplementaryConfiguration":{},
      "tags":{},
      "configurationItemVersion":"1.3",
      "configurationItemCaptureTime":"2020-03-23T02:02:28.440Z",
      "configurationStateId":1584928948999,
      "awsAccountId":"999999999",
      "configurationItemStatus":"ResourceDiscovered",
      "resourceType":"AWS::IAM::User",
      "resourceId":"AIDASP666ZVA36XQGO6DD",
      "resourceName":"FoxySugar",
      "ARN":"arn:aws:iam::9999999:user/FoxySugar",
      "awsRegion":"global",
      "availabilityZone":"Not Applicable",
      "configurationStateMd5Hash":"",
      "resourceCreationTime":"2020-01-07T14:06:13.000Z"
   },
   "notificationCreationTime":"2020-03-23T02:40:01.999Z",
   "messageType":"ConfigurationItemChangeNotification",
   "recordVersion":"1.3"
}



Output:


{
   "Evaluations": [
      {
         "Annotation": "string",
         "ComplianceResourceId": "string",
         "ComplianceResourceType": "string",
         "ComplianceType": "string",
         "OrderingTimestamp": number
      }
   ],
   "ResultToken": "string",
   "TestMode": boolean
}
This is the PutEvaluations request syntax, from the AWS documentation.

Explanation (fields):
Evaluations: This is an array of compliance reports. You can include as many as you like.
Annotation: This is an optional description.
ComplianceResourceId: This is the ID of the resource.
ComplianceResourceType: This is the resource type; it must be one supported by Config.
ComplianceType: This must be one of the following:  
  • COMPLIANT
  • NON_COMPLIANT
  • NOT_APPLICABLE
  • INSUFFICIENT_DATA
OrderingTimestamp: This is a timestamp.
ResultToken: This must be the same ResultToken the function received from Config.
TestMode: If this is set to true, then nothing is actually reported back to Config.  


With this in mind, you should be able to make sense of these sample AWS Config Lambda codes such as one found here.

Friday, March 13, 2020

Terraform Cloud current workspace name

Terraform Notes

How to get current workspace name

You're working in TF Cloud and you want your .tf file to pull the current workspace name. These are what I would call TF Cloud environment variables, global variables, backend data, or stuff you can self-reference. 

Here is my main.tf in my GitHub repo that is tied to my Terraform Cloud account. 

variable "ATLAS_WORKSPACE_NAME"{
}

locals{
  version = "1.1"
}

resource "null_resource" "example" {
  triggers = {
      key   = "hello"
      value = formatdate("YYYY-MM-DD hh:mm:ssZZZZZ", timestamp())
  }
}

output "version"{
  value = local.version
}

output "workspace"{
  value = var.ATLAS_WORKSPACE_NAME  
}


The most important part is the empty variable "ATLAS_WORKSPACE_NAME" declaration at the top. Even though I would consider "ATLAS_WORKSPACE_NAME" to be a global variable in the traditional sense, Terraform creates a dependency graph that will throw an error saying, in effect, "I don't see the variable that you are trying to use," unless it is declared. However, declaring a variable that has the same name as a "global" variable does not overwrite the global variable or change the scope of the variable. The null_resource block is only here to ensure the Apply takes, because without any resource TF Cloud will say there is nothing to do. The version local is here so that something actually changes, which causes the version of the file to change (and triggers a run in TF Cloud).

Here is the end result:

Wait... there's more. What can I get besides the name of the workspace? Run a local-exec to dump all environment variables. Add this resource block to the above code and re-run it.


resource "null_resource" "envs" {
  provisioner "local-exec" {
      command = "printenv"
  }
}

Based on my sampling, it appears that any variable that begins with "TF_VAR_" can be obtained.
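
The same filtering that printenv output suggests can be sketched in Python, assuming you only want the TF_VAR_-prefixed entries (the two seeded values below are illustrative; in TF Cloud they are injected for you):

```python
import os

# Seed a couple of illustrative variables for the sketch
os.environ["TF_VAR_ATLAS_WORKSPACE_NAME"] = "Lambchop"
os.environ["TF_VAR_TFE_RUN_ID"] = "run-J2AHxyXbkcT7VJDD"

# Keep only the TF_VAR_-prefixed variables, like the printenv output above
tf_vars = {k: v for k, v in os.environ.items() if k.startswith("TF_VAR_")}
print(tf_vars)
```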


TF_VAR_ATLAS_ADDRESS=https://app.terraform.io
TF_VAR_ATLAS_CONFIGURATION_NAME=Lambchop
TF_VAR_ATLAS_CONFIGURATION_SLUG=my_org_name/Lambchop
TF_VAR_ATLAS_CONFIGURATION_VERSION_GITHUB_BRANCH=master
TF_VAR_ATLAS_CONFIGURATION_VERSION_GITHUB_COMMIT_SHA=99999999999
TF_VAR_ATLAS_RUN_ID=run-J2AHxyXbkcT7VJDD
TF_VAR_ATLAS_WORKSPACE_NAME=Lambchop
TF_VAR_ATLAS_WORKSPACE_SLUG=my_org_name/Lambchop
TF_VAR_TFE_RUN_ID=run-J2AHxyXbkcT7VJDD


Here is the TF code to get a couple more environment variables:

variable "TFE_RUN_ID"{
}
output "TFE_RUN_ID"{
  value = var.TFE_RUN_ID 
}

variable "ATLAS_ADDRESS"{
}
output "ATLAS_ADDRESS"{
  value = var.ATLAS_ADDRESS
}

Wednesday, March 11, 2020

Terraform Lambda resource lifecycle

Terraform Notes

Lambda resource lifecycle and conditional tags

This example will create a Lambda function and initially tag the resource with creation_date and modified_date, but it will ONLY update the modified_date IF the Python file's hash has changed.
- To prevent creation_date from updating on each run, just add it to the Lambda resource's lifecycle ignore_changes list.
- To prevent modified_date from updating on each run, see the logic in locals below:

  • If file does not exist, then set new modified_date
  • If file exists, but hash of the file has changed then set new modified_date
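
The modified_date decision above boils down to a hash comparison; here is a pure-Python sketch of that logic (illustrative only; the Terraform below expresses it with fileexists() and ternaries):

```python
def pick_modified_date(old_hash, new_hash, old_date, now):
    # Keep the previous modified_date while the code hash is unchanged;
    # otherwise stamp the current time as the new modified_date
    return old_date if old_hash == new_hash else now
```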


lambda resource

resource "aws_lambda_function" "test_lambda_function" {
  filename = "${path.module}/${var.zip_filename}"
  ## this is the standard handler for python function
  handler = "lambda_function.lambda_handler"
  ## this is just a function name
  function_name = var.function_name
  ## the role arn is obtained from below
  role = aws_iam_role.test_lambda_role.arn
  runtime = "python3.8"
  ## Need this source_code_hash to ensure function is updated when the zip is updated
  source_code_hash = local.new_file_hash
  ## these are the variables that can be used by the code
  environment {
    variables = {
      myvar = var.test
    }
  }
  tags = local.tags
  # Only attributes defined by the resource type can be ignored.
  #  last_modified and source_code_size is only here for illustration purposes.
  #  for tags, any NEW tag creation can't be ignored. 
  # if you create a tag from AWS console that isn't listed, then it will cause update to occur
  # if you set to ignore any tag, this will NOT update the tag after the first run
  lifecycle {
    ignore_changes = [
      last_modified,
      source_code_size,
      tags["creation_date"]
    ]
  }
}

readme file resource

This manages the trigger for the modified_date tag:
resource "local_file" "readme"{
  content = jsonencode({"name"=var.function_name,"lastmodified"=local.new_modified_date,"hash"=local.new_file_hash})
  filename = local.readme_file
}

data call 

This creates the zip file from the .py file:
data "archive_file" "init"{
  type = "zip"
  output_path = local.zip_file_path
  source_dir = "${path.module}/${var.function_filepath}/"
}

variables

variable "region" {
  default = "us-east-1"
}

variable "function_name"{
  default = "test"
}

variable "function_filepath"{
  default = "files"
}

variable "zip_filename"{
  default = "test.zip"
}

variable "test"{
  default = "hello world"
}

locals


locals{
    ## Readme file should reside with the invoking root, because this file can be deleted
    readme_file = "${path.root}/readme.json"
    zip_file_path = "${path.module}/${var.zip_filename}"
    new_file_hash = data.archive_file.init.output_base64sha256
    current_time_stamp = formatdate("YYYY-MM-DD hh:mm:ssZZZZZ", timestamp())
}

## If readme files exists, use existing data
locals{
    ## if this is a new file, then old_mod_date will be current date
    old_modified_date = fileexists(local.readme_file) ? jsondecode(file(local.readme_file)).lastmodified : local.current_time_stamp
    ## if this is a new file, then old_file_hash will be the current hash
    old_file_hash     = fileexists(local.readme_file) ? jsondecode(file(local.readme_file)).hash : local.new_file_hash
}

locals{
    ## if old and new hash are equal then new_mod_date will be old_mod_date, otherwise use the current date, 
    new_modified_date = local.new_file_hash == local.old_file_hash ? local.old_modified_date : local.current_time_stamp
}

## Tags
locals{
  tags = {
    "creation_date" = local.current_time_stamp
    "modified_date" = local.new_modified_date
    "keep_until"    = "0"
    "hash"          = local.new_file_hash
  }
}


Monday, March 9, 2020

AWS Lambda Python url request

Web call in Lambda Python 

...without using requests from the botocore.vendored module


Recently, it became impossible to use the requests module from botocore.vendored. So here is a workaround, plus a handy little ditty to test your network configuration from inside your Lambda function.

I use a Lambda environment variable called 'hostname'.


import json
import socket
import os
import urllib.request


def lambda_handler(event, context):
    # TODO implement
    ABOUTME = socket.gethostname()    
    MYIP = socket.gethostbyname(ABOUTME)   
    print("About me...")
    print(ABOUTME, "=>", MYIP)
    
    print("Check outside connection...")
    HOSTNAME = os.environ['hostname']
    IP = socket.gethostbyname(HOSTNAME)
    print(HOSTNAME, "=>",IP)
    
    URL = "https://api.ipify.org?format=json"
    req = urllib.request.Request(URL)
    response = urllib.request.urlopen(req)
    output = response.read().decode('utf8')
    fromOutsideIP = json.loads(output)["ip"]
    print('Your advertised source IP is',fromOutsideIP)
    
    return {
        'statusCode': 200,
        'body': ('Done')
    }
