Monday, December 31, 2018

PowerShell Move Eventlogs to S3

Moving Eventlogs to S3

This can also be done with any remote location. In the code below I export the logs as CSV; you could also move them as CAB files, but I prefer to be able to read these files natively without extracting them first. On most servers this runs as a scheduled task every hour because of how fast our Security logs fill up (a sample scheduled-task command follows the script below).

Here is a sample of "get-eventlog -list" output


  Max(K) Retain OverflowAction        Entries Log
  ------ ------ --------------        ------- ---
  20,480      0 OverwriteAsNeeded          25 Application
  20,480      0 OverwriteAsNeeded           0 HardwareEvents
     512      7 OverwriteOlder              0 Internet Explorer
  20,480      0 OverwriteAsNeeded           0 Key Management Service
  20,480      0 OverwriteAsNeeded     225,262 Security
  20,480      0 OverwriteAsNeeded          45 System
  15,360      0 OverwriteAsNeeded         450 Windows PowerShell

Actual code to back up the event logs


#####################
#
# Export all eventlogs as CSV
# Clears logs after export
#
#####################
#This command will gather all the available Logs (not the actual logs themselves)
$evLogs = get-eventlog -list
$currentTime = $(get-date)
$thisDate = $currentTime.GetDateTimeFormats()[5]
#get date/time information and pad the numbers
$year = ($currentTime.Year | out-string).trim().padleft(4,"0")
$month = ($currentTime.Month | out-string).trim().padleft(2,"0")
$day = ($currentTime.Day | out-string).trim().padleft(2,"0")
$hour = ($currentTime.hour | out-string).trim().padleft(2,"0")
$min = ($currentTime.Minute | out-string).trim().padleft(2,"0")

try{
    if((test-path "e:") -eq $false){
        $rootDir = "c:\temp"
    }else{
        $rootDir = "e:\temp"
    }

    if((test-path $rootDir) -eq $false){
        mkdir $rootDir -Force
    }
    ##This is my target bucket
    $targetBucket = "s3://my-server-logs" 
    $targetPrefix = $targetBucket + "/" + $env:COMPUTERNAME + "/" + $year + "/" + $month + "/" + $day

    foreach($log in $evLogs){
        if($log.entries.count -gt 0){
            $filename = $thisdate + "-" + $hour + $min + "-" + $log.log + ".csv"
            $sourcefilename = $rootDir + "\" + $filename
            $targetfilename = $targetPrefix + "/" + $filename
            $events = get-eventlog -log $log.Log
            $events | Export-Csv -Path $sourcefilename -NoTypeInformation
            if(Test-Path $sourcefilename){
                Clear-EventLog -logname $log.Log    
                aws s3 mv $sourcefilename $targetfilename
            }
        }
    }
}catch{
    ##Put error handling here...
}
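
To create the hourly scheduled task mentioned above, something like the following works; the task name and script path here are placeholders, not the ones I actually use.

##Sketch: register an hourly task (placeholder name/path) that runs the export script as SYSTEM
schtasks /Create /TN "ExportEventLogsToS3" /SC HOURLY /RU SYSTEM /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Export-EventLogs.ps1"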

PowerShell Cleanup C Drive

Cleaning C Drive

I do this on a Windows 2012R2 server.

Move the archived event logs to the S3 bucket

#Archives are located here by default
$archiveFiles = Get-ChildItem -path C:\windows\system32\winevt -include "Archive*" -Recurse
#This is the S3 bucket where we'll keep these logs
$targetBucket = "s3://my-server-logs"
#We are going to organize these log files under the computer name and date stamp
$targetPrefix = $targetBucket + "/" + $env:COMPUTERNAME
foreach($item in $archiveFiles){
    $splitoutput = $item.name.split("-")
    $year = $splitoutput[2]
    $month = $splitoutput[3]
    $day = $splitoutput[4]
    if(($year -match "\d{4}") -and ($month -match "\d{2}") -and ($day -match "\d{2}")){
        $targetFile = $targetPrefix + "/" + $year + "/" + $month + "/" + $day + "/" + $item.name
        aws s3 mv $item.fullname $targetFile
    }
}


Cleanup applied service packs and updates

This removes the ability to roll back updates, so only do this after you've verified the patches didn't break anything.

##Remove superseded and unused system files
dism.exe /online /Cleanup-Image /StartComponentCleanup
##All existing service packs and updates cannot be uninstalled after this command
dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase
##Service packs cannot be uninstalled after this command
dism.exe /online /Cleanup-Image /SPSuperseded
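
DISM can also report how much space the component store is using and whether a cleanup is recommended; this is an easy way to sanity-check the savings before and after.

##Optional: report component store size and whether cleanup is recommended
dism.exe /online /Cleanup-Image /AnalyzeComponentStore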


Relocate Software Distribution Directory

The Windows Update cache (C:\Windows\SoftwareDistribution) can grow quite large, so I move it to the E: drive and point a junction at the new location to free up space on C:.

##Stop Windows Update Service
net stop wuauserv
##Rename current software distribution directory
rename-item C:\windows\SoftwareDistribution SoftwareDistribution.old
##Create a new location for this distribution directory
mkdir E:\Windows-SoftwareDistribution
##Create a junction so the old path points at the new location
cmd /c mklink /J C:\Windows\SoftwareDistribution "E:\Windows-SoftwareDistribution"
##Start the service again
net start wuauserv
##Once Windows Update is confirmed working, remove the old directory
rmdir C:\windows\SoftwareDistribution.old -Recurse -Confirm
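
You can confirm the junction is in place (ideally before removing the old directory) by listing reparse points under C:\Windows; the <JUNCTION> entry should point at E:\Windows-SoftwareDistribution.

##Quick check: list junction points under C:\Windows
cmd /c dir /AL C:\Windows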

PowerShell Folder Size Checker

PowerShell folder size checker. 

I use this on Windows 2012R2. It's a small script to analyze the size of each sub-directory. Output goes to a text file; I recommend Excel (or the PowerShell snippet after the sample output) for analyzing it.

#######################################
#
# Get tree like drive size output
#
# Modify the last line to reflect the 
#  folder starting point and the depth
#  of sub-folders to output
#
# Your mileage will vary depending on 
#  the permissions you have to 
#  sub-directories.
#  But you can do the math afterward.
########################################
# Create a new blank output file
$log = "e:\temp\foldersize.txt"
out-file -FilePath $log -InputObject ""

# This is the main function; it calls itself recursively
function get-folderSize(){
    param(
        $path = "e:\temp",
        $depth = 3
    )
    $log = "e:\temp\foldersize.txt"
    $output = ""
    $output = Get-ChildItem -path $path -ErrorAction SilentlyContinue
    $totalSize = 0
    #look through every item individually
    foreach($item in $output){
        #If it is a folder, recurse into it to get its size
        if($item.psIsContainer -eq $true){
            $totalSize = $totalSize + (Get-folderSize -Path $item.fullname -depth ($depth-1))
        }else{
            $totalSize = $totalSize + $item.length
        }
    }
    #As long as depth is at least 0, write this folder's total size to the output
    # Use the delimiter of double colon when parsing output
    if($depth -ge 0){
        $inputObject = $path + "::" + [math]::round($totalSize/1MB,2) + "::" + $depth
        out-file -FilePath $log -InputObject $inputObject -Append
    }
    return $totalSize
}
get-folderSize -path C:\Windows -depth 1


Sample output:

... 
C:\Windows\System::0.03::0
C:\Windows\System32::2885.63::0
C:\Windows\SystemResources::3.23::0
C:\Windows\SysWOW64::992.38::0
C:\Windows\TAPI::0::0
C:\Windows\Tasks::0::0
C:\Windows\Temp::1525.79::0
C:\Windows\ToastData::0.01::0
C:\Windows\tracing::0::0
C:\Windows\Vss::0.01::0
C:\Windows\Web::2.68::0
C:\Windows\WinSxS::7392.18::0
C:\Windows::15796.69::1
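
Since the output uses a double-colon delimiter, you can also pull it back into PowerShell instead of Excel. A minimal sketch, assuming the default e:\temp\foldersize.txt output path:

# Parse foldersize.txt into objects and sort by size
Get-Content "e:\temp\foldersize.txt" |
    Where-Object { $_ -match "::" } |
    ForEach-Object {
        $parts = $_ -split "::"
        [pscustomobject]@{ Path = $parts[0]; SizeMB = [double]$parts[1]; Depth = [int]$parts[2] }
    } |
    Sort-Object SizeMB -Descending |
    Format-Table -AutoSize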

Friday, December 14, 2018

re:Invent hints

Things to know before attending re:Invent

I was able to attend re:Invent 2018, but unfortunately I was ill-prepared for the event. This is a note to self.

  1. Get the Mobile App here
    1. There are games here (for prizes)
    2. You can register for sessions here
    3. You can get shuttle schedules here
  2. Register early for sessions
  3. Ensure official AWS emails aren't being sent to the junk folder of your email account; you will get daily newsletters during the event and these are useful
  4. Some hands-on sessions are paid sessions, but they will all fill up quickly. These are the most useful sessions at re:Invent.
  5. You can get stuff at sessions such as t-shirts, Amazon credits, or whatever new gadget they release. Clues for these will be in the daily emails.
  6. Hands-on sessions will use your registered email to log in to the labs, so don't forget it
  7. If you want swag, go early to get your size
  8. If you are AWS certified, go to the certification lounge (more swag)
  9. There are multiple vendor expos (there was one in the Aria and one in the Venetian); they have different activities at each location
  10. You don't have to watch the keynote speakers at the Venetian; you can watch them on a big screen at any of the other sponsoring hotels. I saw no advantage in attending in person
  11. The best free meal (breakfast and lunch) locations are as follows (IMHO): Aria > Venetian > MGM
  12. Get to Las Vegas after 3 PM. You could leave your luggage at the hotel until your room is ready, but by arriving after 3 PM your room will be ready and you can get settled before starting all the walking you'll be doing
  13. Bring a light-weight laptop. A 17-inch is nice, but it gets heavy real quick
  14. And bring a padded backpack while you are at it
  15. Wear comfortable shoes
  16. Most people wear a button-up shirt or t-shirt and jeans.
  17. Slides and recordings of every session will be available a week after the event.


Thursday, December 13, 2018

EBS Auto Snapshot Script in Bash

How to automatically maintain EBS snapshots in AWS. 

This solution was developed before AWS released the EBS snapshot lifecycle feature (Data Lifecycle Manager), and it relies on an EC2 instance. As it's fairly lightweight, it can run on a free-tier instance type. I also have a Python version of this script that runs as a Lambda function, which I will post another time.

In our environment, the most frequent snapshot required was every 30 minutes and the least frequent was weekly. So the required snapshot periods look like this:
  • M: Half-Hourly, bottom of the hour
  • H: Hourly: top of the hour
  • D: Daily: At 0100
  • W: Weekly: At 0100 on Sunday morning
So, using the four periods above, we create a new tag called SnapRate that is applied to any volume we want to manage using this method. 
  • M/H/D/W, for example 12/24/7/6
Here we provide the total number of snapshots we want to maintain PER period. In the above example, we would keep 12 half-hourly (covering the past 6 hours), 24 hourly (the past 24 hours), 7 daily (the past week), and 6 weekly (the past 6 weeks) snapshots; a CLI tagging example follows the picture below. You can picture the snapshots like this:
  • MMMMMM H H H D        D         D                W                      W                                W 
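
For reference, here is a minimal sketch of tagging a volume with the AWS CLI so the scripts below pick it up; the volume ID and tag values are placeholders. The Name, Application, Function, Mode and Keep tags are included because the create script filters on or reads them.

# Sketch only: tag a volume (placeholder ID and values) for the snapshot scripts
# SnapRate = half-hourly/hourly/daily/weekly snapshot counts to retain
aws ec2 create-tags --resources vol-0123456789abcdef0 \
    --tags Key=SnapRate,Value=12/24/7/6 \
           Key=Name,Value=my-server-data \
           Key=Application,Value=MyApp \
           Key=Function,Value=Data \
           Key=Mode,Value=Auto \
           Key=Keep,Value=Yes \
    --region us-east-1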

Support Function: 

This is get_cycle.bsh; it returns the array index (0 = half-hourly, 1 = hourly, 2 = daily, 3 = weekly) that corresponds to the SnapRate positions defined above.

#!/bin/bash
DOM=$(TZ='America/New_York' date +%-d)
DOW=$(TZ='America/New_York' date +%u)
HOUR=$(TZ='America/New_York' date +%-H)
MINU=$(TZ='America/New_York' date +%-M)
if [ $DOW == 1 ] && [ $HOUR == 1 ] && [ $MINU -lt 30 ]
then
  CYCLE="3"
elif [ $HOUR == 1 ] && [ $MINU -lt 30 ]
then
  CYCLE="2"
elif [ $MINU -lt 30 ]
then
  CYCLE="1"
else
  CYCLE="0"
fi
echo $CYCLE

Main Functions:

Use this script, aws_create_snapshots.bsh, in your crontab to automatically create snapshots; set the crontab entry to run every 30 minutes.

#!/bin/bash
###########################################
#
# aws_create_snapshots.bsh
# Description: Create snapshots in the account for all volumes that fit the tag criteria
#
# Last edit: 11/21/2018
#
# Prereq:
#  aws cli
#  jq
###########################################
source /etc/profile
# In my case aws cli was installed under /usr/local/bin
PATH=$PATH:/usr/local/bin
echo "Path is set to $PATH"
echo "AWS_CA_BUNDLE is at $AWS_CA_BUNDLE"
LOG_DATE=`TZ='America/New_York' date +%Y%m%d`
LOGFILE="${BASH_SOURCE%/*}/logs/aws_create_snapshots_$LOG_DATE.log"
echo "Logs are written to $LOGFILE"
FORMAT_DATE=`TZ='America/New_York' date +%Y%m%d.%H%M%S`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
echo "Script Start" >> $LOGFILE
echo "Current Time: $FORMAT_DATE" >> $LOGFILE
CYCLE=`${BASH_SOURCE%/*}/get_cycle.bsh`
CYCLE_WORD_ARRAY=("Half-Hourly" "Hourly" "Daily" "Weekly")
CYCLE_LETT_ARRAY=("M" "H" "D" "W")
CYCLE_TIME_ARRAY=(30 60 1440 10080)
echo "Current Cycle: ${CYCLE_WORD_ARRAY[CYCLE]}" >> $LOGFILE
EC2_AZ=`curl http://169.254.169.254/latest/meta-data/placement/availability-zone`
REGIONID="`echo \"$EC2_AZ\" | sed 's/[a-z]$//'`"
VOLUMES=`aws ec2 describe-volumes \
--filter "Name=tag-key,Values='SnapRate'" \
         "Name=tag-key,Values='Name'" \
         "Name=tag-key,Values='Application'" \
         "Name=tag:Mode,Values='Auto'" \
         "Name=tag:Keep,Values='Yes'" \
         "Name='attachment.status',Values='attached'" \
--query "Volumes[*].{VolumeID:VolumeId, \
                         Name:Tags[?Key==\\\`Name\\\`].Value, \
                     Function:Tags[?Key==\\\`Function\\\`].Value, \
                  Application:Tags[?Key==\\\`Application\\\`].Value, \
                     SnapRate:Tags[?Key==\\\`SnapRate\\\`].Value}" \
--region $REGIONID`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
for VOL in $(echo "${VOLUMES}" | jq -c '.[]'); do
    VOLID=`echo ${VOL} | jq -r '.VolumeID'`
    echo "Volume ID: $VOLID" >> $LOGFILE
    APP=`echo ${VOL} | jq '.Application' | jq -r .[0]`
    echo "Application: $APP" >> $LOGFILE
    FUNCTION=`echo ${VOL} | jq '.Function' | jq -r .[0]`
    echo "Function: $FUNCTION" >> $LOGFILE
    NAME_PRE=`echo ${VOL} | jq '.Name' | jq -r .[0]`
    SEP="_"
    NAME=$NAME_PRE$SEP$FORMAT_DATE
    echo "New Name: $NAME" >> $LOGFILE
    SNAPRATE=`echo ${VOL} | jq '.SnapRate' | jq -r .[0]`
    echo "SnapRate: $SNAPRATE" >> $LOGFILE
    THIS_CYCLE=$CYCLE
    NEW_CYCLE_VALUE=0
    IFS='/' read -r -a SNAPARRAY <<< "$SNAPRATE"
    NEW_CYCLE_VALUE=${SNAPARRAY[THIS_CYCLE]}
    while [ $NEW_CYCLE_VALUE -eq 0 ] && [ $THIS_CYCLE -gt 0 ]; do
        let THIS_CYCLE-=1
        NEW_CYCLE_VALUE=${SNAPARRAY[THIS_CYCLE]}
    done
    if [ $NEW_CYCLE_VALUE -gt 0 ]
    then
        THIS_CYCLE_LETT=${CYCLE_LETT_ARRAY[THIS_CYCLE]}
        TIME_SEED=${CYCLE_TIME_ARRAY[THIS_CYCLE]}
        THIS_CYCLE_TIME=`echo "$((TIME_SEED * NEW_CYCLE_VALUE))"`
        EXPIREDATE=`date -d "+$THIS_CYCLE_TIME minutes" "+%Y%m%d.%H%M%S"`
        echo "VOL CYCLE: $THIS_CYCLE_LETT" >> $LOGFILE
        echo "EXPIRE DATE: $EXPIREDATE" >> $LOGFILE
        OUTPUT=`aws ec2 create-snapshot --volume-id $VOLID --description $NAME --region $REGIONID`
        echo $OUTPUT >> $LOGFILE
        SNAPID=`echo ${OUTPUT} | jq '.SnapshotId' -r`
        if [ "$SNAPID" != "null" ] && [ -n "$SNAPID" ]
        then
            # Tags in JSON, built as a single string (note $APP matches the variable set above)
            TAGS='[{"Key":"Name","Value":"'$NAME'"},'\
'{"Key":"Cycle","Value":"'$THIS_CYCLE_LETT'"},'\
'{"Key":"Keep","Value":"Yes"},'\
'{"Key":"Source","Value":"'$VOLID'"},'\
'{"Key":"Application","Value":"'$APP'"},'\
'{"Key":"Function","Value":"'$FUNCTION'"},'\
'{"Key":"Mode","Value":"Auto"},'\
'{"Key":"ExpirationDate","Value":"'$EXPIREDATE'"}]'
            echo "New SnapID: $SNAPID" >> $LOGFILE
            OUTPUT2=`aws ec2 create-tags --resources $SNAPID --tags "$TAGS" --region $REGIONID`
            echo $OUTPUT2 >> $LOGFILE
        fi
    else
        echo "No snapshot requested in SnapRate" >> $LOGFILE
    fi

    echo "---------" >> $LOGFILE
done

Use this script, aws_cleanup_snapshots.bsh, in your crontab to automatically delete snapshots that are beyond the requested SnapRate value. Set this one to run every 30 minutes as well.

#!/bin/bash
################################################################
#
# aws_cleanup_snapshots.bsh
# Description: Clean up snapshots
#
# Last edit: 12/7/2018
#
# Prereq:
#  aws cli
#  jq
################################################################
source /etc/profile
PATH=$PATH:/usr/local/bin
echo "Path is set to $PATH"
echo "AWS_CA_BUNDLE is at $AWS_CA_BUNDLE"
LOG_DATE=`TZ='America/New_York' date +%Y%m%d`
LOGFILE="${BASH_SOURCE%/*}/logs/aws_cleanup_snapshots_$LOG_DATE.log"
echo "Logs are written to $LOGFILE"
FORMAT_DATE=`TZ='America/New_York' date +%Y%m%d.%H%M%S`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
echo "Script Start" >> $LOGFILE
echo "Current Time: $FORMAT_DATE" >> $LOGFILE
CYCLE=`${BASH_SOURCE%/*}/get_cycle.bsh`
CYCLE_WORD_ARRAY=("Half-Hourly" "Hourly" "Daily" "Weekly")
CYCLE_LETT_ARRAY=("M" "H" "D" "W")
CYCLE_TIME_ARRAY=(30 60 1440 10080)
CURRENT_CYCLE=${CYCLE_LETT_ARRAY[CYCLE]}
echo "Current Cycle: ${CYCLE_WORD_ARRAY[CYCLE]}" >> $LOGFILE
EC2_AZ=`curl http://169.254.169.254/latest/meta-data/placement/availability-zone`
REGIONID="`echo \"$EC2_AZ\" | sed 's/[a-z]$//'`"
VOLUMES=`aws ec2 describe-volumes \
--filter "Name=tag-key,Values='SnapRate'" \
         "Name=tag-key,Values='Name'" \
         "Name=tag-key,Values='Application'" \
         "Name=tag:Mode,Values='Auto'" \
         "Name=tag:Keep,Values='Yes'" \
--query "Volumes[*].{VolumeID:VolumeId, \
                         Name:Tags[?Key==\\\`Name\\\`].Value, \
                     SnapRate:Tags[?Key==\\\`SnapRate\\\`].Value}" \
--region $REGIONID`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
for VOL in $(echo "${VOLUMES}" | jq -c '.[]'); do
    VOLID=`echo ${VOL} | jq -r '.VolumeID'`
    echo "Volume ID: $VOLID" >> $LOGFILE
    SNAPRATE=`echo ${VOL} | jq '.SnapRate' | jq -r .[0]`
    echo "SnapRate: $SNAPRATE" >> $LOGFILE
    IFS='/' read -r -a SNAPARRAY <<< "$SNAPRATE"
    CYCLE_VALUE=${SNAPARRAY[CYCLE]}
    if [ $CYCLE -lt ${#CYCLE_TIME_ARRAY[@]} ]
    then
      timeaway=$((${CYCLE_TIME_ARRAY[$CYCLE]} * $CYCLE_VALUE * -1))
    else
      timeaway=-10080
    fi
    #Threshold is that many minutes in the past; the comparison below is done in epoch seconds, so time zone does not matter
    THRESHOLD=`date -d "$timeaway minutes"`
    SNAPS=`aws ec2 describe-snapshots --owner-ids self \
        --filters "Name=tag:Mode,Values='Auto'" \
                  "Name=volume-id,Values=$VOLID" \
                  "Name=tag:Cycle,Values=$CURRENT_CYCLE" \
                  "Name=status,Values=completed" \
        --query "Snapshots[*].{SnapshotId:SnapshotId, \
                              Description:Description, \
                                StartTime:StartTime, \
                                    Cycle:Tags[?Key==\\\`Cycle\\\`].Value}" \
        --region $REGIONID`
    for SNAP in $(echo "${SNAPS}" | jq -c '.[]'); do
        SNAPID=`echo ${SNAP} | jq -r '.SnapshotId'`
        START_TIME_STRING=`echo ${SNAP} | jq -r '.StartTime'`
        START_TIME=`date -d $START_TIME_STRING`
        if [[ $(date -d "$START_TIME" +%s) -lt $(date -d "$THRESHOLD" +%s) ]]
        then
            echo "Delete SnapshotID: $SNAPID" >> $LOGFILE
            OUTPUT=`aws ec2 delete-snapshot --snapshot-id $SNAPID --region $REGIONID`
            echo $OUTPUT >> $LOGFILE
        fi
    done
    echo "--------------------------" >> $LOGFILE
done
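
With both scripts in place, the crontab entries could look something like the following. The /opt/aws-snapshots path is an assumption; install them wherever you like, but note the scripts expect get_cycle.bsh and a logs/ directory to exist alongside them.

# Sketch only: run both scripts every 30 minutes (placeholder install path)
*/30 * * * * /opt/aws-snapshots/aws_create_snapshots.bsh
*/30 * * * * /opt/aws-snapshots/aws_cleanup_snapshots.bsh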
