Monday, December 30, 2019

Working with ENI Flowlogs

Setting up Flowlogs

  1. Go to your network interface and create a flow log
  2. It'll take about 5 minutes before you see anything in your Log Stream
  3. Each entry follows the default flow log format:
    version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status
    • Protocols (IANA protocol numbers):
      • 1: ICMP (source and dest ports will be 0)
      • 6: TCP
      • 17: UDP
    • Start/End are in UNIX seconds

Basic search (from Cloudwatch Logs)

  • Find all traffic with source IP 10.0.0.1
    [version, account, eni, source=10.0.0.1, destination, srcport, destport, protocol, packets, bytes, start, end, action, status]
  • Find all traffic with source IP 10.0.0.1 destined to port 8443
    [version, account, eni, source=10.0.0.1, destination, srcport, destport=8443, protocol, packets, bytes, start, end, action, status]
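Field positions are easier to see against a concrete record. A quick local sketch (the record below is made up) that pulls out the fields the patterns above match on:

```shell
# A made-up flow log record in the default 14-field format
record="2 123456789012 eni-0a1b2c3d 10.0.0.1 10.0.0.99 49152 8443 6 10 840 1577664000 1577664060 ACCEPT OK"

# Field 4 = srcaddr, field 7 = dstport, field 8 = protocol
src=$(echo "$record" | awk '{print $4}')
dstport=$(echo "$record" | awk '{print $7}')
proto=$(echo "$record" | awk '{print $8}')
echo "src=$src dstport=$dstport proto=$proto"
```

Protocol 6 (TCP) with dstport 8443 is exactly the traffic the second filter pattern above would surface.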

AWS ELB Primer

Creating classic ELB

aws elb create-load-balancer
    --load-balancer-name ELB_NAME
    --listener "Protocol=HTTP,
                LoadBalancerPort=80,
                InstanceProtocol=HTTP,
                InstancePort=80"
    --scheme internal
    --subnets subnet-xxxx subnet-yyyy
    --security-groups sg-123456

Tagging ELB

aws elb add-tags 
    --load-balancer-names ELB_NAME 
    --tags "Key='keyA',Value='valueA'" "Key='keyB',Value='valueB'"

Adding instance to ELB

aws elb register-instances-with-load-balancer
    --load-balancer-name MY_ELB
    --instances i-xxxxx


Removing instance from ELB


aws elb deregister-instances-with-load-balancer
    --load-balancer-name MY_ELB
    --instances i-xxxxx

Working with Certificates

View: aws iam get-server-certificate --server-certificate-name MY_CERT_NAME

Delete: aws iam delete-server-certificate --server-certificate-name MY_CERT_NAME

List: aws iam list-server-certificates

Upload

aws iam upload-server-certificate
    --server-certificate-name MY_CERT
    --certificate-body file://c:\temp\public.pem
    --private-key file://c:\temp\private.pem
    --certificate-chain file://e:\temp\chain.pem
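If you just want to rehearse the upload command, a throwaway self-signed key/cert pair can be generated locally with openssl (file names and the CN are placeholders):

```shell
# Generate a throwaway 2048-bit key and self-signed cert (CN is a placeholder)
openssl req -x509 -newkey rsa:2048 -nodes -keyout private.pem -out public.pem \
    -days 1 -subj "/CN=example.test"

# Sanity-check the cert before uploading it
openssl x509 -in public.pem -noout -subject
```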

AWS EC2 Reset Windows Password (Win 2008)


  1. Detach the root volume from the inaccessible Windows instance (A) and attach it to another Windows instance (B) as a non-root volume. Be sure B is running an identical version of Windows.
  2. Log into B
  3. Mount the secondary volume
  4. Browse the secondary volume to \Program Files\Amazon\Ec2ConfigService\Settings\config.xml
  5. Find the section for "Ec2SetPassword"
  6. Set the "State" property to "Enabled"
    <Ec2ConfigurationSettings>
      <Plugins>
        <Plugin>
          <Name>Ec2SetPassword</Name>
          <State>Enabled</State>
        </Plugin>
        ...
      </Plugins>
    </Ec2ConfigurationSettings>
    
  7. Replace the file (accept the UAC warning)
  8. Update the disk signature
    1.  Open regedit.exe
    2. Under HKEY_LOCAL_MACHINE, find "Windows Boot Manager"
    3. This should look like "HKLM\BCD00000000\Objects\{XXXXX-XXX-XXXX-XXXX-XXXXXX}\Elements\"
    4. Go to sub-path "11000001"
    5. Select "Element" Value
    6. Find the four bytes at offset 0x38
    7. Reverse the order of those bytes (e.g. 02 36 E9 6E becomes 6E E9 36 02)
    8. This is the disk signature that this disk needs to have
    9. Open Admin Command Prompt
    10. Run diskpart
    11. Select the disk of the drive from Windows instance A
      select disk 2
    12. View the disk signature of this drive
      uniqueid disk
    13. If this isn't what was found from step 7, then we need to make it so
      uniqueid disk id=6EE93602
    14. This will cause this volume to come offline
  9. From AWS, detach this volume from B and add it to A as /dev/sda1
  10. Proceed to retrieve the random password as usual
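The byte reversal in step 8 can be sanity-checked locally; the bytes below are just the example values from the steps above:

```shell
# Four bytes read at offset 0x38 (example values)
bytes="02 36 E9 6E"

# The id diskpart expects is those bytes in reverse order
sig=$(echo "$bytes" | awk '{print $4 $3 $2 $1}')
echo "uniqueid disk id=$sig"
```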

S3 Presigned URL


A temporary URL that can be generated and given to anyone to allow time-limited access to a bucket or an object.

The permission granted can be at most that of the role used to generate the presigned URL.

Presigned URL includes the following:
  • X-Amz-Algorithm
  • X-Amz-Expires
  • X-Amz-Date
  • X-Amz-SignedHeaders
  • X-Amz-Security-Token
  • X-Amz-Credential
  • X-Amz-Signature
A presigned URL is valid until whichever of the below expires first:
  • 3600 seconds if none is defined
  • Seconds as defined by the "--expires-in" flag
  • Expiration time of the role credentials used to generate the URL
A CORS configuration must be defined on the bucket to allow external access if the user is retrieving the target object from a webpage on another domain.
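A minimal CORS configuration sketch, in the JSON shape accepted by `aws s3api put-bucket-cors --cors-configuration file://cors.json` (the origin and values here are placeholders):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```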

Example (via PowerShell)

$s3uri = "myBucket/mylogs/important.log"
$expireSec = 120
$output = aws s3 presign $s3uri --expires-in $expireSec
$objIE = new-object -ComObject InternetExplorer.Application
$objIE.Navigate($output)
$objIE.visible = $true
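The query parameters listed above can be picked apart from a generated URL; a sketch against a made-up (and truncated) presigned URL:

```shell
# A made-up presigned URL with a truncated signature
url="https://myBucket.s3.amazonaws.com/mylogs/important.log?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=120&X-Amz-Date=20191230T000000Z&X-Amz-SignedHeaders=host&X-Amz-Signature=abcd1234"

# Pull the lifetime out of the X-Amz-Expires parameter
expires=$(echo "$url" | sed -n 's/.*X-Amz-Expires=\([0-9]*\).*/\1/p')
echo "URL valid for $expires seconds"
```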

AWS S3 Bucket Policy examples


Grant full permission to another account


{
  "Version": "2012-10-17",
  "Id": "PolicyForPrincipal",
  "Statement": [
    {
      "Sid": "AccountAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXX:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::myBucket",
        "arn:aws:s3:::myBucket/*"
      ]
    }
  ]
}

Notes
  • The principal points to the root of the account; to grant a specific user in that account, delegate via an IAM policy in that account
  • Resources must contain the bucket itself if you want to grant the "s3:ListBucket" operation

Grant read/write from specific set of IP addresses


{
  "Version": "2012-10-17",
  "Id": "PolicyForIP",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::myBucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.10.10.10/32",
            "168.0.0.10/32"
          ]
        }
      }
    }
  ]
}

Saturday, December 28, 2019

AWS CLI Examples

Miscellaneous AWS CLI Examples

Run (Launch) new instance


aws ec2 run-instances
      --image-id ami-xxxxxxxxxx
      --network-interfaces "NetworkInterfaceId=eni-xxxxxx,DeviceIndex=0"
      --key-name MY_KEY
      --instance-type "m4.xlarge"
      --disable-api-termination
      --iam-instance-profile "Arn=arn:aws-iso:iam::1234567890:instance-profile/myiamprofile"

Lines
  1. Base command
  2. ID of AMI
  3. Network Interface ID and its placement (if known). You can opt instead to provide subnet-id if you want a new interface to be used 
  4. Key Name
  5. Instance Type
  6. Disable API termination (remove if you want to enable API termination)
  7. IAM instance profile (remove if you don't want to use IAM profile)
Other options and their defaults
  • Security Group: Default
  • Shutdown Behavior: Stop
  • EBS Optimized: False
  • Enhanced Monitoring: False

Create new AMI

aws ec2 create-image
        --instance-id i-xxxxxxxxxxxxxxxxxx
        --name MY_AMI_NAME_01
        --description "My Description"

Tagging Resource


aws ec2 create-tags --resources SOME_ID --tags "Key=MYKEY,Value='MYVALUE'..."
Notes:
  • Any EC2 resource ID can be used
  • Each tag is a Key,Value pair separated by a comma
  • Multiple tags can be provided, separated by spaces

View all of my AMIs

aws ec2 describe-images --owners "self"

View all Instances of some account

aws ec2 describe-instances --filters "Name=owner-id,Values=XXXXXX"


View all snapshots of some account


aws ec2 describe-snapshots --owner-ids XXXXXXX

Remove Termination Protection

aws ec2 modify-instance-attribute
        --instance-id i-xxxxxxxxxxx
        --no-disable-api-termination

Terminate Instance

aws ec2 terminate-instances
        --instance-ids i-xxxxxxxxxxx

Create Volume from Snapshot


aws ec2 create-volume
        --snapshot-id snap-xxxxxxxxxx
        --size 50
        --availability-zone us-east-1a
        --volume-type gp2


Copy single file to S3

aws s3 cp filename.log s3://bucketname


Copy directory to S3

aws s3 cp \\path\directory\ s3://bucketname/prefix --recursive

Note
  • Case sensitive
  • Empty sub-directories will be ignored

Copy with filter (only copy .log files from all path and sub-path)


aws s3 cp \\path\directory\ s3://bucketname/prefix/ --exclude '*' --include '*.log' --recursive

Note
  • Exclude everything but .log extensions
  • Order of operation is important

Copy from bucket to bucket

aws s3 cp s3://bucketA s3://bucketB --recursive

Sync local content to bucket

aws s3 sync \\localpath\ s3://bucketA/path/ --exclude '*' --include '*.log' --delete

Note
  • Delete flag ensures what is deleted at source is also deleted at destination
  • Recurse flag is always assumed

Sync bucket to local except a directory


aws s3 sync s3://bucketA/path/ \\localpath\test\ --exclude 'Special/*' 


S3 Permissions

Use "private" default - only "me" is granted permission

aws s3 cp filename.txt s3://bucketA/path/

Also allow public read

aws s3 cp filename.txt s3://bucketA/path/ --acl public-read

Also allow public read/write

aws s3 cp filename.txt s3://bucketA/path/ --acl public-read-write

Give owner of bucket full control too

aws s3 cp filename.txt s3://bucketA/path/ --acl bucket-owner-full-control

Upload Bucket Policy

aws s3api put-bucket-policy --bucket myBucket --policy file://myPolicy.json
Notes
  • myPolicy.json file is expected in the current directory
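Before uploading, it's worth confirming the policy file parses as JSON at all; a local sketch (the stub myPolicy.json and its contents are placeholders):

```shell
# Write a stub policy (contents are a placeholder) and validate it as JSON
cat > myPolicy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": []
}
EOF

python3 -m json.tool myPolicy.json > /dev/null && echo "policy OK"
```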

Various other EC2 describes

  • Customer Gateways: aws ec2 describe-customer-gateways
  • Network ACL: aws ec2 describe-network-acls
  • ENIs: aws ec2 describe-network-interfaces
  • Route Table: aws ec2 describe-route-tables
  • Security Group: aws ec2 describe-security-groups
  • Key Pairs: aws ec2 describe-key-pairs
  • Subnets: aws ec2 describe-subnets
  • VPN GWs: aws ec2 describe-vpn-gateways
  • VPCs: aws ec2 describe-vpcs
  • Peering Connections: aws ec2 describe-vpc-peering-connections
  • VPN Connections: aws ec2 describe-vpn-connections

Thursday, December 5, 2019

Patching PeopleTool 8.57

How to Patch Weblogic in PeopleTool 8.57

Per Oracle, if you are running PeopleTool 8.57 on Windows, it is recommended that you deploy the latest DPK that comes with all the necessary patches. However, if you just want to patch Weblogic (because...), follow this.

  1. Stop all PIA services and Oracle service
  2. Ensure you have 7-zip installed (because Windows can't natively handle such long path names that Oracle provides)
  3. Download the latest Weblogic patch (the latest one contains all the previous patches too)
  4. Use 7-zip to extract the content
  5. Go into the content directory and run weblogic's opatch.bat with apply and "-oh" flag to designate Oracle Home directory for this installation. I'll assume Oracle DPK was installed in c:\pt857
    c:\pt857\pt\bea\OPatch\opatch.bat apply -oh c:\pt857\pt\bea -silent
    
  6. Call same opatch.bat with lsinventory to ensure the patch took
    c:\pt857\pt\bea\OPatch\opatch.bat lsinventory -oh c:\pt857\pt\bea
    
  7. Here's the whole thing in a bat file
    call net stop "ORACLE ProcMGR v12.2.2.0.0_VS2015"
    call net stop "psDEMO-WebLogicAdmin"
    call net stop "psDEMO-PIA"
    aws s3 cp s3://my-repo-name/7zip/7z920.exe c:\temp
    aws s3 cp s3://my-repo-name/patches/p30386660_122130_Generic.zip c:\temp
    cd c:\temp
    call 7z920.exe /S /D="C:\Program Files (x86)\7-Zip"
    call "C:\Program Files (x86)\7-Zip\7z.exe" x C:\temp\p30386660_122130_Generic.zip
    call c:\pt857\pt\bea\OPatch\opatch.bat apply -oh c:\pt857\pt\bea -silent
    call "c:\Program Files (x86)\7-Zip\Uninstall.exe" /S
    cd c:\pt857\pt\bea\opatch
    call c:\pt857\pt\bea\opatch\opatch.bat lsinventory -oh c:\pt857\pt\bea
    

Saturday, April 13, 2019

LAMP for beginner


Linux, Apache, MySQL, PHP (LAMP) for beginners...

Enough to get you started... (this was done in AWS).

  1. Launch a new EC2 instance from AWS' Red Hat 7 AMI (free tier)
  2. Log into it using "ec2-user"
  3. Elevate privilege
    sudo su - 
  4. Do a fresh update
    yum update
  5. Add repositories
    1. REMI:
      rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
    2. EPEL
      rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
    3. MySQL
      rpm -Uvh https://repo.mysql.com/mysql80-community-release-el7.rpm
  6. Installing Apache
    1. Yum install command
      yum install httpd
    2. Set it to start on boot:
      systemctl enable httpd.service 
    3. Start it now:
      systemctl start httpd.service
    4. Default log location (access_log and error_log)
      /var/log/httpd
    5. Default configuration
      /etc/httpd/conf/httpd.conf
  7. Installing mysql
    1. Yum install command
      yum install mysql-server
    2. Set this to start on boot:
      systemctl enable mysqld.service
    3. Start it now
      systemctl start mysqld.service
    4. Get the mysql temporary password for root
      grep "A temporary password" /var/log/mysqld.log | tail -n 1
    5. You should get something like this...
       [Server] A temporary password is generated for root@localhost: V!cedo*iP0iW
    6. Secure your mysql
      mysql_secure_installation
    7. Accept all secure measures. Don't forget your new password!
  8. Install php
    1. Do it from the REMI repo (Red Hat only ships an old PHP 5.x)
      yum --enablerepo=epel,remi-php73 install php
    2. Install Modules of your choice (these are what I installed)
      yum --enablerepo=remi-php73 install php-mysql php-xml php-xmlrpc php-soap php-gd php-fpm
    3. Restart Apache for the php install to take effect
      systemctl restart httpd.service
  9. You should be able to browse to it now
    1. you can do it locally:
      curl localhost
    2. You can do it remotely
      http://publicIP
    3. If you don't see it from remote, check your security group and try stopping the local firewall
      systemctl stop firewalld
  10. Write your first HTML page
    1. Go to /var/www/html/
    2. Edit a new file
      vi index.html
    3. Paste the following
      <html>
      <head>
      </head>
      <body>
      <p>Hello World</p>
      </body>
      </html>
      
  11. Write your first PHP page
    1. Still at /var/www/html/
    2. Edit a new file
      vi index.php
    3. Anything inside <?php and ?>  will be interpreted as PHP code
    4. Paste the following
      <html>
      <head>
      </head>
      <body>
      <?php
      print("Hello World");
      phpinfo();
      ?>
      </body>
      </html>
      
  12.  Let's secure your site
    1. Install mod_ssl module
      yum install mod_ssl
    2. Get a self-signed cert via this command (or look up how to purchase one or get a free one from cacert.org)
      openssl req -newkey rsa:2048 -nodes -keyout /etc/ssl/private/myserver.key -x509 -days 365 -out /etc/ssl/private/myserver.crt
    3. Go to /etc/httpd/conf.d
    4. Create a new file ssl.conf and paste the following into it
      LoadModule ssl_module modules/mod_ssl.so
      
      Listen 443
      <VirtualHost *:443>
          ServerName myserver
          SSLEngine on
          SSLCertificateFile "/etc/ssl/private/myserver.crt"
          SSLCertificateKeyFile "/etc/ssl/private/myserver.key"
      </VirtualHost>
      
    5. Restart httpd
      systemctl restart httpd.service
    6. Check log if you run into issue, you may have a syntax error
    7. Go to your browser and use https instead
  13. Bonus Round! Let's accept client certs.
    1. Update the previous step's ssl.conf with this new
      LoadModule ssl_module modules/mod_ssl.so
      
      Listen 443
      <VirtualHost *:443>
          ServerName myserver
          SSLEngine on
          SSLCertificateFile "/etc/ssl/private/myserver.crt"
          SSLCertificateKeyFile "/etc/ssl/private/myserver.key"
          SSLCACertificateFile "/etc/ssl/private/myserver.crt" 
          ## Your choice here is required, optional, and optional_no_ca
          SSLVerifyClient optional_no_ca
          ## this is number of depths of CA to traverse, use 1
          SSLVerifyDepth 1
          ## this send the cert info to PHP
          SSLOptions +StdEnvVars +ExportCertData
          ## this will allow it to accept your own CA unknown to browser
          SSLCADNRequestPath /etc/ssl/private/ 
      </VirtualHost>
      
    2. Restart httpd so that the new ssl.conf is accepted
    3. At this point, you need a cert loaded to your browser to test this
      1. You can use your company cert
      2. You can also use self-signed cert and loaded onto your browser (use openssl to self-sign a key pair then use "openssl pkcs12" command to put it together as PKCS12 so that you can import it into your browser)
    4. Now browse to your sample PHP page you created before and you should get prompted for your cert, give it and let's take a look at the Apache Environment variables
    5. Creating a table to hold visitor credentials.
      1. Log into mysql
        mysql -u root -p
      2. Create a new database
        mysql> CREATE DATABASE mylamp;
      3. Select this database for use
        mysql> USE mylamp;
      4. Let's create our table
        CREATE TABLE users (
            id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
            username VARCHAR(50) NOT NULL UNIQUE,
            created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
            lastvisit_at DATETIME DEFAULT CURRENT_TIMESTAMP 
        );
        
      5. Let's check it
        mysql> SHOW TABLES;
      6. You can also do this:
        mysql> DESCRIBE users;
      7. Now let's create a new user to access this table (I'm disabling password validation for this user).
        SET GLOBAL validate_password.policy=LOW;
        CREATE USER 'mylampuser'@'localhost'
          IDENTIFIED WITH mysql_native_password BY 'password';
        GRANT ALL
          ON mylamp.*
          TO 'mylampuser'@'localhost'
          WITH GRANT OPTION;
        
      8. You can exit now
        mysql> exit
    6. Let's create some PHP files to write user information to the table.
      1. Create a new file, config.php (at /var/www/html)
        vi config.php
      2. Paste the following
        <?php
        /* Database credentials. */
        define('DB_SERVER', 'localhost');
        define('DB_USERNAME', 'mylampuser');
        define('DB_PASSWORD', 'password');
        define('DB_NAME', 'mylamp');
         
        /* Attempt to connect to MySQL database */
        $link = mysqli_connect(DB_SERVER, DB_USERNAME, DB_PASSWORD, DB_NAME);
         
        // Check connection
        if($link === false){
            die("ERROR: Could not connect. " . mysqli_connect_error());
        }
        ?>
        
      3. Create a new file, login.php (also at /var/www/html)
        vi login.php
      4. Paste the following
        <?php
        // Include config file
        session_start();
        require_once "config.php";
        if(isset($_SERVER['SSL_CLIENT_S_DN_Email'])){
         $_SESSION['username'] = $_SERVER['SSL_CLIENT_S_DN_Email'];
         $username = $_SERVER['SSL_CLIENT_S_DN_Email'];
         $sql = "SELECT id FROM users WHERE username = ?";
         if($stmt = mysqli_prepare($link, $sql)){
          // Bind variables to the prepared statement as parameters
          mysqli_stmt_bind_param($stmt, "s", $username);
          // Attempt to execute the prepared statement
          if(mysqli_stmt_execute($stmt)){
           /* store result */
           mysqli_stmt_store_result($stmt);
        
           if(mysqli_stmt_num_rows($stmt) == 1){
            //update user
            $sql = "UPDATE users SET lastvisit_at = now() where username = ?";
           } else{
            //add a new user
            $sql = "Insert into users (username) values (?)";
           }
           if($stmt = mysqli_prepare($link, $sql)){
            mysqli_stmt_bind_param($stmt, "s", $username);
            if(mysqli_stmt_execute($stmt)){
             mysqli_stmt_store_result($stmt);
             mysqli_stmt_free_result($stmt);
             mysqli_stmt_close($stmt);
            }
           }else{
            echo "Oops! Something went wrong. Please try again later.";
           }
          } else{
           echo "Oops! Something went wrong. Please try again later.";
          }
        }
        }else{
         $_SESSION['username'] = 'unknown';
        }
        echo $_SESSION['username'];
        mysqli_close($link);
        ?>
        

      5. Also create users.php to view all the entries in the table
        vi users.php
      6. Paste the following
        <?php
        require_once "config.php";
        
        $sql = "SELECT * from users";
        if($stmt = mysqli_prepare($link, $sql)){
                if(mysqli_stmt_execute($stmt)){
                        mysqli_stmt_store_result($stmt);
                        printf("Number of rows: %d.<br>", mysqli_stmt_num_rows($stmt));
                        mysqli_stmt_bind_result($stmt, $id, $username, $created, $lastuse);
                        while(mysqli_stmt_fetch($stmt)){
                                printf("%s -- %s -- %s -- %s. <br>",$id,$username,$created,$lastuse);
                        }
                        mysqli_stmt_free_result($stmt);
                        mysqli_stmt_close($stmt);
                }
        }else{
                echo "Oops! Something went wrong. Please try again later.";
        }
        /* close connection */
        mysqli_close($link);
        ?>
        
      7. Browse to users.php, you'll see 0 entries
      8. In a new tab, open login.php, you should get success
      9. Now, go back to users.php tab, and refresh the page, you should see 1 entry with your information.
      10. For extra flexibility, you can include this in your other page to process the login action (index.php)
        <html>
        <head>
        </head>
        <body>
        <?php
        include 'login.php';
        print("Hello World");
        ?>
        </body>
        </html>
        

Thursday, March 21, 2019

Lambda write & read DynamoDB

Writing and Reading from DynamoDB using Lambda Python

...using API Gateway as trigger.

Below is a simple example of writing to and reading from a DynamoDB table. This code would be triggered by a webpage that sends a JSON package via a configured API Gateway, and it returns a JSON package back to the website. Be sure the Role has the necessary permission to read and write to the DynamoDB table being used.
    import json
    import decimal
    import boto3
    from boto3.dynamodb.conditions import Key, Attr
    
    # Helper class to convert a DynamoDB item to JSON.
    class DecimalEncoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, decimal.Decimal):
                if o % 1 > 0:
                    return float(o)
                else:
                    return int(o)
            return super(DecimalEncoder, self).default(o)
    
    def lambda_handler(event, context):
        # TODO implement
        #define DynamoDB object
        dynamodb = boto3.resource('dynamodb')
        #define which table we want to use
        table = dynamodb.Table("yourOwnTable")
        #print the status of the table
        print(table.table_status)
        
        #Get the body string from the event input json
        #we convert this to json so we can use it easier
        body = json.loads(event['body'])
        #we enter this into the DB
        response1 = table.put_item(
            Item={
                'RequestTime': body['myFormVariables']['RequestTime'],
                'User': body['myFormVariables']['User'],
                'entryID': body['myFormVariables']['entryID'],
                'Field1': body['myFormVariables']['Field1'],
                'Field2': body['myFormVariables']['Field2']
            }
        )
    
        print("Table Output")
        ##get it all
        response2 = table.scan()
        ##Get only the objects that match the given key
        #response = table.query(KeyConditionExpression=Key('entryID').eq("L3dVGMDS7yqzdoxCxOm4fg"))
        
        #Print all values in the response
        for i in response2['Items']:
            print(i['entryID'], ":", i['Field1'], ":", i['Field2'], ":", i['RequestTime'])
        
        #Just to illustrate how I handled cognito user information that was passed in
        cognito = event['requestContext']['authorizer']['claims']['cognito:username']
        
        return {
            'statusCode': 200,
            'body': json.dumps(response2, cls=DecimalEncoder)
        }
    

Test Sample

Here is a test event you can use to test your code.

    {
      "path": "/entry",
      "httpMethod": "POST",
      "headers": {
        "Accept": "*/*",
        "Authorization": "eyJraWQiOiLTzRVMWZs",
        "content-type": "application/json; charset=UTF-8"
      },
      "queryStringParameters": null,
      "pathParameters": null,
      "requestContext": {
        "authorizer": {
          "claims": {
            "cognito:username": "the_username"
          }
        }
      },
      "body": "{\"myFormVariables\":{\"RequestTime\":\"2019-01-11T05:56:22.517Z\",\"User\":\"the_username\",\"entryID\":\"L3dVGMDS7yqzdoxCxOm4f1\",\"Field1\":47,\"Field2\":122}}"
    }

Sunday, March 10, 2019

Lambda read S3 object with S3 trigger

Configuring your Lambda function to read S3 object with S3 upload trigger

Are you new to Lambda and haven't quite got the test case figured out for S3 upload trigger? In the steps below, you'll build a new Python Lambda function from a blueprint and use the CloudWatch log to populate a new test case. This way you don't have to upload just to test your new function.
  1. Create a new Lambda Function
  2. Select "Use a blueprint"
  3. Search for "S3"
  4. Select "s3-get-object-python3"
  5. Configure it using mostly default settings
    1. Function Name: of your choice
    2. Create a new role from AWS policy template
    3. Role Name: of your choice
    4. Select your own trigger bucket
    5. Select all object create events
    6. Leave Prefix and Suffix blank
    7. Enable trigger
    8. Select Create Function
  6. Uncomment line 12 that begins print('Received event: '
  7. Upload a file to the bucket you selected in the previous step
  8. From the Lambda function console, go to Monitoring
  9. Go to View logs in CloudWatch - this will open a new tab into CloudWatch
  10. Click the latest and probably ONLY log stream
  11. View in TEXT (not rows)
  12. Copy the "Received event:" JSON shown in your log screen into your clipboard
  13. Now go back to your Lambda function console
  14. Up by the top between Action and Test, click on the drop down and select Configure Test Event
  15. Create new test event (it does not matter what template you choose)
  16. Give it a name
  17. In the body, paste the JSON from the previous step (overwrite existing text)
  18. Save and then click Test
  19. Your Lambda function will behave as if you've just uploaded that same file again
  20. Now go edit the function to do what you want and test easily!
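For reference, the test event you capture in step 12 will look roughly like this (heavily abbreviated; the bucket name and key are placeholders):

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "my-trigger-bucket", "arn": "arn:aws:s3:::my-trigger-bucket" },
        "object": { "key": "uploads/sample.txt", "size": 1024 }
      }
    }
  ]
}
```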
