Thursday, March 21, 2019

Lambda write & read DynamoDB

Writing and Reading from DynamoDB using Lambda Python

...using API Gateway as the trigger.

Below is a simple example of writing to and reading from a DynamoDB table. The function is triggered by a webpage that sends a JSON payload through a configured API Gateway, and it returns a JSON payload back to the website. Be sure the execution role has the necessary permissions to read from and write to the DynamoDB table being used.
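
Since the handler below calls put_item, scan, and table_status (a DescribeTable call under the hood), a minimal sketch of such a role policy might look like this (the region, account ID, and table name in the ARN are placeholders to replace with your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:DescribeTable"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/yourOwnTable"
    }
  ]
}
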
import json
import decimal
import boto3
from boto3.dynamodb.conditions import Key, Attr

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            if o % 1 > 0:
                return float(o)
            else:
                return int(o)
        return super(DecimalEncoder, self).default(o)

def lambda_handler(event, context):
    #define DynamoDB object
    dynamodb = boto3.resource('dynamodb')
    #define which table we want to use
    table = dynamodb.Table("yourOwnTable")
    #print the status of the table
    print(table.table_status)
    
    #Get the body string from the event input json
    #we convert this to json so we can use it easier
    body = json.loads(event['body'])
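    # body is now a dict like: {'myFormVariables': {'RequestTime': ..., 'User': ..., 'entryID': ..., 'Field1': ..., 'Field2': ...}}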
    #we enter this into the DB
    response1 = table.put_item(
        Item={
            'RequestTime': body['myFormVariables']['RequestTime'],
            'User': body['myFormVariables']['User'],
            'entryID': body['myFormVariables']['entryID'],
            'Field1': body['myFormVariables']['Field1'],
            'Field2': body['myFormVariables']['Field2']
        }
    )

    print("Table Output")
    ##get it all
    response2 = table.scan()
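    ## note: scan reads the whole table (paginated at 1 MB per call) - fine for a small demo table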
    ##Get only the objects that match the given key
    #response = table.query(KeyConditionExpression=Key('entryID').eq("L3dVGMDS7yqzdoxCxOm4fg"))
    
    #Print all values in the response
    for i in response2['Items']:
        print(i['entryID'], ":", i['Field1'], ":", i['Field2'], ":", i['RequestTime'])
    
    #Just to illustrate how I handled cognito user information that was passed in
    cognito = event['requestContext']['authorizer']['claims']['cognito:username']
    
    return {
        'statusCode': 200,
        'body': json.dumps(response2, cls=DecimalEncoder)
    }
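
To exercise the deployed function from outside the console, a sketch like the following could POST the same payload to your API Gateway endpoint (the invoke URL and the Authorization token below are placeholders, not real values):

import json
import urllib.request

# placeholder values - substitute your own invoke URL and Cognito id token
url = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/entry'
token = 'YOUR_ID_TOKEN'

payload = {'myFormVariables': {
    'RequestTime': '2019-01-11T05:56:22.517Z',
    'User': 'the_username',
    'entryID': 'L3dVGMDS7yqzdoxCxOm4f1',
    'Field1': 47,
    'Field2': 122}}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json', 'Authorization': token})
with urllib.request.urlopen(req) as response:
    print(response.read().decode('utf-8'))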

Test Sample

Here is a test event you can use to test your code. 

{
  "path": "/entry",
  "httpMethod": "POST",
  "headers": {
    "Accept": "*/*",
    "Authorization": "eyJraWQiOiLTzRVMWZs",
    "content-type": "application/json; charset=UTF-8"
  },
  "queryStringParameters": null,
  "pathParameters": null,
  "requestContext": {
    "authorizer": {
      "claims": {
        "cognito:username": "the_username"
      }
    }
  },
  "body": "{\"myFormVariables\":{
  \"RequestTime\":"2019-01-11T05:56:22.517Z\",
  \"User\":\"the_username\",
  \"entryID\":\"L3dVGMDS7yqzdoxCxOm4f1\",
  \"Field1\":47,
  \"Field2\":122}}"
}

Sunday, March 10, 2019

Lambda read S3 object with S3 trigger

Configuring your Lambda function to read S3 object with S3 upload trigger


Are you new to Lambda and haven't quite figured out the test case for an S3 upload trigger? In the steps below, you'll build a new Python Lambda function from a blueprint and use the CloudWatch log to populate a new test case. That way you don't have to upload a file just to test your new function (a sketch of the blueprint's handler follows the steps).
  1. Create a new Lambda Function
  2. Select "Use a blueprint"
  3. Search for "S3"
  4. Select "s3-get-object-python3"
  5. Configure it using mostly default settings
    1. Function Name: of your choice
    2. Create a new role from AWS policy template
    3. Role Name: of your choice
    4. Select your own trigger bucket
    5. Select "All object create events"
    6. Leave prefix and Suffix blank
    7. Enable trigger
    8. Select Create Function
  6. Uncomment line 12, which begins print('Received event: '
  7.  Upload a file to the bucket you select in the previous step
  8.  From the Lambda function console, go to Monitoring
  9. Go to View logs in CloudWatch - this will open a new tab in CloudWatch
  10. Click the latest and probably ONLY log stream
  11. View in TEXT (not rows)
  12. Copy the logged 'Received event:' JSON shown in your log screen into your clipboard
  13. Now go back to your Lambda function console
  14. At the top, between Actions and Test, click the drop-down and select Configure Test Event
  15. Create new test event (it does not matter what template you choose)
  16. Give it a name
  17. In the body, paste the string from previous step that you put in your clipboard (overwrite existing text)
  18. Save and then click Test
  19. Your Lambda function will behave as if you've just uploaded that same file again
  20. Now go edit the function to do what you want and test easily!
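
For reference, the blueprint's handler looks roughly like this once line 12 is uncommented (details may vary between blueprint versions):

import json
import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # line 12 in the blueprint - uncomment it so the full event lands in CloudWatch,
    # then copy that logged JSON to build your test case
    print('Received event: ' + json.dumps(event, indent=2))
    # the S3 trigger delivers the bucket name and URL-encoded object key in Records
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print('CONTENT TYPE: ' + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}.'.format(key, bucket))
        raise e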

Saturday, March 9, 2019

Powershell Update ENI's DNS Tag

Update AWS network interface's Tag with its DNS entry (PowerShell 3.0)

Want to keep a tag that contains the DNS name for the private IP address of each of your ENIs?

This PowerShell script calls nslookup (native to Windows) and adds a tag to each ENI with the returned value.



################################################
#
# Get all the reserved ENI in the VPC and
#  create a new tag based on the 
#  responding nslookup on the Private IP
#
#
#  (\_/)
#  (>.<)
# (")_(")
#
#################################################
# Get ENIs with ID, Status, PrivateIP, and DNS Tag
$ip_raw = aws ec2 describe-network-interfaces --filter "Name=vpc-id,Values=vpc-xxxxxxx" --query "NetworkInterfaces[*].{ID:NetworkInterfaceId,Status:Status,IP:PrivateIpAddress,DNS:TagSet[?Key=='DNS'].Value}"
# Convert this to PS Object
$ip = $ip_raw | out-string | convertfrom-json
foreach($item in $ip){
# Proceed only if DNS Tag does NOT exist
  if($item.DNS.length -eq 0){
    $error.clear()
    $dns = nslookup $item.ip
    if($error.count -eq 0){
      $name_line = ($dns | ?{$_ -like 'Name*'})[0]
      $dns_name = $name_line.split(":")[1].trim()
    }else{
      $dns_name = "None"
    }
    aws ec2 create-tags --resources $item.ID --tags "Key=DNS,Value=$dns_name"
  }
}

Lambda Update EC2 Tags

Update EC2 Tags using Python (Lambda function)


A simple example demonstrating Python's ability to look up and update Tags.


import json
import boto3
from datetime import date, timedelta

# I don't like the dictionary returned by AWS, so I convert it to
# Key:Value pairs
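#  e.g. [{'Key': 'Name', 'Value': 'web01'}] -> {'Name': 'web01'}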
def parse_tags(tag_dict):
    my_dict = {}
    for tag in tag_dict:
        for item in tag:
            if item == 'Key':
                key = tag[item]
            else:
                value = tag[item]
        my_dict[key]= value
    return my_dict

def lambda_handler(event, context):
    today = date.today()
    #Declare object for all of our ec2 objects in this region
    ec2 = boto3.resource('ec2', region_name='us-east-2')
    #give me all the instances
    instances = ec2.instances.all()
    print('Instances')
    for ins in instances:
        print("Instance Id: ", ins.id)
        ins_tag_dict = {}
        if(ins.tags != None):
            ins_tag_dict = parse_tags(ins.tags)
        #get function of dictionary return None if not found
        name_tag = ins_tag_dict.get('Name')
        if(name_tag == None):
            #Create a name tag for this object
            name_tag = 'Sam'
            ins.create_tags(Tags=[{'Key':'Name','Value': name_tag}])
        #Give me all the volumes for this instance
        volumes = ins.volumes.all()
        for vol in volumes:
            vol_tag_dict = {}
            if(vol.tags != None):
                vol_tag_dict = parse_tags(vol.tags)
            print("Volume Id: ",vol.id)
            # attachment (LIST) has following values:
            ## [{'AttachTime': datetime.datetime(2018, 12, 3, 5, 11, 5, tzinfo=tzlocal()), 'Device': '/dev/xvda', 'InstanceId': 'i-XXXXXXXX', 'State': 'attached', 'VolumeId': 'vol-XXXXXXX', 'DeleteOnTermination': True}]
            # Convert this LIST to DICT
            vol_att_dict = vol.attachments[0]
            vol_device = vol_att_dict.get('Device')
            vol_name_tag = vol_tag_dict.get('Name')
            if(vol_name_tag == None):
                #Create a name tag for this object
                vol_name_tag = name_tag + '_' + vol_device
                vol.create_tags(Tags=[{'Key':'Name','Value': vol_name_tag}])
        #Give ENI names of the EC2 Instance if they are missing name tag
        net_interfaces = ins.network_interfaces
        for eni in net_interfaces:
            print("ENI Id: ",eni.id)
            eni_tag_dict = {}
            if(eni.tag_set != None):
                eni_tag_dict = parse_tags(eni.tag_set)
            eni_name_tag = eni_tag_dict.get('Name')
            if(eni_name_tag == None):
                eni.create_tags(Tags=[{'Key':'Name','Value':name_tag}])
    print('Volumes not in use')
    #give me all the volumes that are not in use
    volumes = ec2.volumes.filter(Filters=[{'Name': 'status', 'Values': ['available']}])
    for vol in volumes:
        print("Volume Id: ",vol.id)
        vol_tag_dict = {}
        if(vol.tags != None):
            vol_tag_dict = parse_tags(vol.tags)
        vol_mode_tag = vol_tag_dict.get('Mode')
        if(vol_mode_tag == None):
            #If Mode tag does not exist, make it Auto mode
            vol.create_tags(Tags=[{'Key':'Mode','Value':'Auto'}])
        vol_expire_tag = vol_tag_dict.get('Expire')
        if(vol_expire_tag == None):
            #If the Expire tag does not exist, set it to 7 days from now
            ## we'd have a different function to do the actual cleanup
            expire_date = (today + timedelta(days=7)).strftime("%Y%m%d")
            vol.create_tags(Tags=[{'Key':'Expire','Value': expire_date}])
    return {
        'statusCode': 200,
        'body': json.dumps('Finished!')
    }
