Monday, December 31, 2018

PowerShell Move Eventlogs to S3

Moving Eventlogs to S3

This can also be done to any remote location. In the code below I export as CSV; you could also export as CAB files, but I prefer to be able to read these files natively without extracting them first. On most servers this runs as a scheduled task every hour because of how fast our Security logs fill up.

Here is a sample of "get-eventlog -list" output


  Max(K) Retain OverflowAction        Entries Log                                                                                                                        
  ------ ------ --------------        ------- ---                                                                                                                        
  20,480      0 OverwriteAsNeeded          25 Application                                                                                                                
  20,480      0 OverwriteAsNeeded           0 HardwareEvents                                                                                                             
     512      7 OverwriteOlder              0 Internet Explorer                                                                                                          
  20,480      0 OverwriteAsNeeded           0 Key Management Service                                                                                                     
  20,480      0 OverwriteAsNeeded     225,262 Security                                                                                                                   
  20,480      0 OverwriteAsNeeded          45 System                                                                                                                     
  15,360      0 OverwriteAsNeeded         450 Windows PowerShell 

Actual Code to backup Eventlogs


#####################
#
# Export all eventlogs as CSV
# Clears logs after export
#
#####################
#This command will gather all the available Logs (not the actual logs themselves)
$evLogs = get-eventlog -list
$currentTime = $(get-date)
$thisDate = $currentTime.GetDateTimeFormats()[5]
#get date/time information and pad the numbers
$year = ($currentTime.Year | out-string).trim().padleft(4,"0")
$month = ($currentTime.Month | out-string).trim().padleft(2,"0")
$day = ($currentTime.Day | out-string).trim().padleft(2,"0")
$hour = ($currentTime.hour | out-string).trim().padleft(2,"0")
$min = ($currentTime.Minute | out-string).trim().padleft(2,"0")

try{
    if((test-path "e:") -eq $false){
        $rootDir = "c:\temp"
    }else{
        $rootDir = "e:\temp"
    }

    if((test-path $rootDir) -eq $false){
        mkdir $rootDir -Force
    }
    ##This is my target bucket
    $targetBucket = "s3://my-server-logs" 
    $targetPrefix = $targetBucket + "/" + $env:COMPUTERNAME + "/" + $year + "/" + $month + "/" + $day

    foreach($log in $evLogs){
        if($log.entries.count -gt 0){
            $filename = $thisdate + "-" + $hour + $min + "-" + $log.log + ".csv"
            $sourcefilename = $rootDir + "\" + $filename
            $targetfilename = $targetPrefix + "/" + $filename
            $events = get-eventlog -log $log.Log
            #Export as CSV so the file stays natively readable, as described above
            $events | Export-Csv $sourcefilename -NoTypeInformation
            if(Test-Path $sourcefilename){
                Clear-EventLog -logname $log.Log    
                aws s3 mv $sourcefilename $targetfilename
            }
        }
    }
}catch{
    ##Put error catching here...
}

PowerShell Cleanup C Drive

Cleaning C Drive

I do this on a Windows 2012R2 server.

Move the archived eventlogs to S3 bucket

#Archives are located here by default
$archiveFiles = Get-ChildItem -path C:\windows\system32\winevt -include "Archive*" -Recurse
#This is the S3 bucket where we'll keep these logs
$targetBucket = "s3://my-server-logs"
#We are going to organize these log files under the computer name and a date stamp
$targetPrefix = $targetBucket + "/" + $env:COMPUTERNAME
foreach($item in $archiveFiles){
    $splitoutput = $item.name.split("-")
    $year = $splitoutput[2]
    $month = $splitoutput[3]
    $day = $splitoutput[4]
    if(($year -match "\d{4}") -and ($month -match "\d{2}") -and ($day -match "\d{2}")){
        $targetFile = $targetPrefix + "/" + $year + "/" + $month + "/" + $day + "/" + $item.name
        aws s3 mv $item.fullname $targetFile
    }
}


Cleanup applied service packs and updates

This will prevent the ability to roll back updates, so only do this once you've verified that patches didn't break anything.

##Remove superseded and unused system files
dism.exe /online /Cleanup-Image /StartComponentCleanup
##All existing service packs and updates cannot be uninstalled after this command
dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase
##Service packs cannot be uninstalled after this command
dism.exe /online /Cleanup-Image /SPSuperseded
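
If you want to see how much space these steps are likely to reclaim before committing to them, the component store can be analyzed first (available on 2012R2 and later); for example:

##Report component store size and whether cleanup is recommended
dism.exe /online /Cleanup-Image /AnalyzeComponentStore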


Relocate Software Distribution Directory



##Stop Windows Update Service
net stop wuauserv
##Rename current software distribution directory
rename-item C:\windows\SoftwareDistribution SoftwareDistribution.old
##Create a new location for this distribution directory
mkdir E:\Windows-SoftwareDistribution
##Make a link 
cmd /c mklink /J C:\Windows\SoftwareDistribution "E:\Windows-SoftwareDistribution"
##Start service
net start wuauserv
##Once everything checks out, remove the old directory
Remove-Item C:\windows\SoftwareDistribution.old -Recurse -Confirm
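
To confirm the junction took effect, a quick check (the listing should show SoftwareDistribution as a <JUNCTION> pointing at E:\Windows-SoftwareDistribution):

##Verify the junction
cmd /c dir C:\Windows | findstr /i SoftwareDistribution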

PowerShell Folder Size Checker

PowerShell folder size checker. 

I use this on Windows 2012R2. A small script to analyze the size of each sub-directory. Output goes to a text file; I recommend using Excel (or the parsing sketch below the sample output) to analyze it.

#######################################
#
# Get tree like drive size output
#
# Modify the last line to reflect the 
#  folder starting point and the depth
#  of sub-folders to output
#
# Your mileage will vary depending on 
#  the permissions you have to 
#  sub-directories.
#  But you can do the math afterward.
########################################
# Create a new blank output file
$log = "e:\temp\foldersize.txt"
out-file -FilePath $log -InputObject ""

# This is the main function; it calls itself recursively for sub-folders
function get-folderSize {
    param(
        $path = "e:\temp",
        $depth = 3
    )
    $log = "e:\temp\foldersize.txt"
    $output = ""
    $output = Get-ChildItem -path $path -ErrorAction SilentlyContinue
    $totalSize = 0
    #look through every item individually
    foreach($item in $output){
        #If it is a folder, then call a nested function of itself
        if($item.psIsContainer -eq $true){
            $totalSize = $totalSize + (Get-folderSize -Path $item.fullname -depth ($depth-1))
        }else{
            $totalSize = $totalSize + $item.length
        }
    }
    #As long as depth is greater than or equal to 0, write this folder's total to the output
    # Use the delimiter of double colon when parsing output
    if($depth -ge 0){
        $inputObject = $path + "::" + [math]::round($totalSize/1MB,2) + "::" + $depth
        out-file -FilePath $log -InputObject $inputObject -Append
    }
    return $totalSize
}
get-folderSize -path C:\Windows -depth 1


Sample output:

... 
C:\Windows\System::0.03::0
C:\Windows\System32::2885.63::0
C:\Windows\SystemResources::3.23::0
C:\Windows\SysWOW64::992.38::0
C:\Windows\TAPI::0::0
C:\Windows\Tasks::0::0
C:\Windows\Temp::1525.79::0
C:\Windows\ToastData::0.01::0
C:\Windows\tracing::0::0
C:\Windows\Vss::0.01::0
C:\Windows\Web::2.68::0
C:\Windows\WinSxS::7392.18::0
C:\Windows::15796.69::1
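
If you'd rather skip Excel, here is a small sketch that parses the double-colon delimited output back into objects and lists the largest folders. The log path matches the script above; the property names are my own.

# Parse foldersize.txt and show the 20 largest folders
Get-Content "e:\temp\foldersize.txt" |
    Where-Object { $_ -match "::" } |
    ForEach-Object {
        $parts = $_ -split "::"
        [pscustomobject]@{ Path = $parts[0]; SizeMB = [double]$parts[1]; Depth = [int]$parts[2] }
    } |
    Sort-Object SizeMB -Descending |
    Select-Object -First 20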

Friday, December 14, 2018

re:Invent hints

Things to know before attending re:Invent

I was able to attend re:Invent 2018, but unfortunately I was ill-prepared for the event. This is a note to self.

  1. Get the Mobile App here
    1. There are games here (for prizes)
    2. You can register for sessions here
    3. You can get shuttle schedules here
  2. Register early for sessions
  3. Ensure official AWS emails aren't being sent to the junk folder of your email account; you will get daily newsletters during re:Invent, and these are useful
  4. Some hands-on sessions are paid sessions, and they all fill up quickly. These are the most useful sessions at re:Invent.
  5. You can get stuff at sessions such as t-shirts, amazon credits, or whatever new gadget they will release. Clues for these will be in the daily emails. 
  6. Hands-on sessions will use your registered email to log into labs, don't forget this
  7. If you want swag, go early to get your size
  8. If you are aws certified, go to certification lounge (more SWAGs)
  9. There are multiple vendor expos (there was one in Aria and one in Venetian), they have different activities in both locations
  10. You don't have to go watch the keynote speakers at Venetian, you can watch it on a big screen at any other sponsored hotels. I saw no advantage in attending in person
  11. The best free meal (breakfast and lunch) locations are as follows (IMHO): Aria > Venetian > MGM
  12. Get to Las Vegas after 3PM. You could leave your luggage at the hotel until your room is ready, but by arriving after 3PM your room will be ready and you can get settled before all the walking you'll be doing
  13. Bring a "light-weight" laptop. 17 inch is nice, but it'll get heavy real quick
  14. And bring a padded backpack while you are at it
  15. Wear comfortable shoes
  16. Most people wear a button-up shirt or a t-shirt and jeans. 
  17. All slides and recordings of every session will be available a week after the event.


Thursday, December 13, 2018

EBS Auto Snapshot Script in Bash

How to automatically maintain EBS snapshots in AWS. 

This solution was developed before AWS released the EBS snapshot lifecycle feature, and it relies on an EC2 instance. As it's fairly light-weight, it can run on a free-tier instance type. I also have a Python version of this script that runs as a Lambda function, which I will post another time.

In our environment, the highest frequency at which a snapshot was required was every 30 minutes, and the lowest was weekly. So the required snapshot frequencies look like this:
  • M: Half-Hourly, bottom of the hour
  • H: Hourly: top of the hour
  • D: Daily: At 0100
  • W: Weekly: At 0100 on Sunday morning
So using the 4 periods above, we create a new tag called SnapRate to be applied to any volumes that we want to manage using this method. 
  • M/H/D/W, for example 12/24/7/6
Where we provide the total number of snapshots we want to maintain PER that period. In the above example, we would have 12 half-hourly (for the past 6 hours), 24 hourly (for the past 24 hours), 7 daily (for the past week), and 6 weekly (for the past 6 weeks). You can picture the snapshots like this (an example tagging command follows the diagram):
  • MMMMMM H H H D        D         D                W                      W                                W 
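
To enroll a volume in this scheme, it needs the tags that the scripts below filter on or copy to the snapshots (SnapRate, Name, Application, Function, Mode=Auto, Keep=Yes). A hedged example; the volume ID, region, and tag values are placeholders:

aws ec2 create-tags --region us-east-1 --resources vol-0123456789abcdef0 --tags \
    Key=Name,Value=myserver-datavol \
    Key=Application,Value=MyApp \
    Key=Function,Value=Database \
    Key=Mode,Value=Auto \
    Key=Keep,Value=Yes \
    Key=SnapRate,Value=12/24/7/6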

Support Function: 

This is get_cycle.bsh; it returns the index into the SnapRate array (M/H/D/W) defined above.

#!/bin/bash
DOM=$(TZ='America/New_York' date +%-d)
DOW=$(TZ='America/New_York' date +%u)
HOUR=$(TZ='America/New_York' date +%-H)
MINU=$(TZ='America/New_York' date +%-M)
if [ $DOW == 1 ] && [ $HOUR == 1 ] && [ $MINU -lt 30 ]
then
  CYCLE="3"
elif [ $HOUR == 1 ] && [ $MINU -lt 30 ]
then
  CYCLE="2"
elif [ $MINU -lt 30 ]
then
  CYCLE="1"
else
  CYCLE="0"
fi
echo $CYCLE

Main Functions:

Use this, aws_create_snapshots.bsh, in your crontab to automatically create snapshots; set the crontab entry to run every 30 minutes (example entries follow the cleanup script below).

#!/bin/bash
###########################################
#
# aws_create_snapshots.bsh
# Description: Ability to create snapshots in the account for all volumes that fits tag criteria
#
# Last edit: 11/21/2018
#
# Prereq:
#  aws cli
#  jq
###########################################
source /etc/profile
# In my case aws cli was installed under /usr/local/bin
PATH=$PATH:/usr/local/bin
echo "Path is set to $PATH"
echo "AWS_CA_BUNDLE is at $AWS_CA_BUNDLE"
LOG_DATE=`TZ='America/New_York' date +%Y%m%d`
LOGFILE="${BASH_SOURCE%/*}/logs/aws_create_snapshots_$LOG_DATE.log"
echo "Logs are written to $LOGFILE"
FORMAT_DATE=`TZ='America/New_York' date +%Y%m%d.%H%M%S`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
echo "Script Start" >> $LOGFILE
echo "Current Time: $FORMAT_DATE" >> $LOGFILE
CYCLE=`${BASH_SOURCE%/*}/get_cycle.bsh`
CYCLE_WORD_ARRAY=("Half-Hourly" "Hourly" "Daily" "Weekly")
CYCLE_LETT_ARRAY=("M" "H" "D" "W")
CYCLE_TIME_ARRAY=(30 60 1440 10080)
echo "Current Cycle: ${CYCLE_WORD_ARRAY[CYCLE]}" >> $LOGFILE
EC2_AZ=`curl http://169.254.169.254/latest/meta-data/placement/availability-zone`
REGIONID="`echo \"$EC2_AZ\" | sed 's/[a-z]$//'`"
VOLUMES=`aws ec2 describe-volumes \
--filter "Name=tag-key,Values='SnapRate'" \
         "Name=tag-key,Values='Name'" \
         "Name=tag-key,Values='Application'" \
         "Name=tag:Mode,Values='Auto'" \
         "Name=tag:Keep,Values='Yes'" \
         "Name='attachment.status',Values='attached'" \
--query "Volumes[*].{VolumeID:VolumeId, \
                         Name:Tags[?Key==\\\`Name\\\`].Value, \
                     Function:Tags[?Key==\\\`Function\\\`].Value, \
                  Application:Tags[?Key==\\\`Application\\\`].Value, \
                     SnapRate:Tags[?Key==\\\`SnapRate\\\`].Value}" \
--region $REGIONID`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
for VOL in $(echo "${VOLUMES}" | jq -c '.[]'); do
    VOLID=`echo ${VOL} | jq -r '.VolumeID'`
    echo "Volume ID: $VOLID" >> $LOGFILE
    APP=`echo ${VOL} | jq '.Application' | jq -r .[0]`
    echo "Application: $APP" >> $LOGFILE
    FUNCTION=`echo ${VOL} | jq '.Function' | jq -r .[0]`
    echo "Function: $FUNCTION" >> $LOGFILE
    NAME_PRE=`echo ${VOL} | jq '.Name' | jq -r .[0]`
    SEP="_"
    NAME=$NAME_PRE$SEP$FORMAT_DATE
    echo "New Name: $NAME" >> $LOGFILE
    SNAPRATE=`echo ${VOL} | jq '.SnapRate' | jq -r .[0]`
    echo "SnapRate: $SNAPRATE" >> $LOGFILE
    THIS_CYCLE=$CYCLE
    NEW_CYCLE_VALUE=0
    IFS='/' read -r -a SNAPARRAY <<< "$SNAPRATE"
    NEW_CYCLE_VALUE=${SNAPARRAY[THIS_CYCLE]}
    while [ $NEW_CYCLE_VALUE -eq 0 ] && [ $THIS_CYCLE -gt 0 ]; do
        let THIS_CYCLE-=1
        NEW_CYCLE_VALUE=${SNAPARRAY[THIS_CYCLE]}
    done
    if [ $NEW_CYCLE_VALUE -gt 0 ]
    then
        THIS_CYCLE_LETT=${CYCLE_LETT_ARRAY[THIS_CYCLE]}
        TIME_SEED=${CYCLE_TIME_ARRAY[THIS_CYCLE]}
        THIS_CYCLE_TIME=`echo "$((TIME_SEED * NEW_CYCLE_VALUE))"`
        EXPIREDATE=`date -d "+$THIS_CYCLE_TIME minutes" "+%Y%m%d.%H%M%S"`
        echo "VOL CYCLE: $THIS_CYCLE_LETT" >> $LOGFILE
        echo "EXPIRE DATE: $EXPIREDATE" >> $LOGFILE
        OUTPUT=`aws ec2 create-snapshot --volume-id $VOLID --description $NAME --region $REGIONID`
        echo $OUTPUT >> $LOGFILE
        SNAPID=`echo ${OUTPUT} | jq '.SnapshotId' -r`
        if [ $SNAPID != "null" ] && [ -n $SNAPID ]
        then
            # Tags in JSON
            TAGS=`echo '[{"Key":"Name","Value":"'$NAME'"},\
{"Key":"Cycle","Value":"'$THIS_CYCLE_LETT'"},\
{"Key":"Keep","Value":"Yes"},\
{"Key":"Source","Value":"'$VOLID'"},\
{"Key":"Application","Value":"'$APPLICATION'"},\
{"Key":"Function","Value":"'$FUNCTION'"},\
{"Key":"Mode","Value":"Auto"},\
{"Key":"ExpirationDate","Value":"'$EXPIREDATE'"}]'`
            echo "New SnapID: $SNAPID" >> $LOGFILE
            OUTPUT2=`aws ec2 create-tags --resources $SNAPID --tags $TAGS --region $REGIONID`
            echo $OUTPUT2 >> $LOGFILE
        fi
    else
        echo "No snapshot requested in SnapRate" >> $LOGFILE
    fi

    echo "---------" >> $LOGFILE
done

Use this, aws_cleanup_snapshots.bsh, in your crontab to automatically delete snapshots that have aged beyond the requested SnapRate retention. Also set this to run every 30 minutes (example crontab entries follow the script). 

#!/bin/bash
################################################################
#
# aws_cleanup_snapshots.bsh
# Description: Clean up snapshots
#
# Last edit: 12/7/2018
#
# Prereq:
#  aws cli
#  jq
################################################################
source /etc/profile
PATH=$PATH:/usr/local/bin
echo "Path is set to $PATH"
echo "AWS_CA_BUNDLE is at $AWS_CA_BUNDLE"
LOG_DATE=`TZ='America/New_York' date +%Y%m%d`
LOGFILE="${BASH_SOURCE%/*}/logs/aws_cleanup_snapshots_$LOG_DATE.log"
echo "Logs are written to $LOGFILE"
FORMAT_DATE=`TZ='America/New_York' date +%Y%m%d.%H%M%S`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
echo "Script Start" >> $LOGFILE
echo "Current Time: $FORMAT_DATE" >> $LOGFILE
CYCLE=`${BASH_SOURCE%/*}/get_cycle.bsh`
CYCLE_WORD_ARRAY=("Half-Hourly" "Hourly" "Daily" "Weekly")
CYCLE_LETT_ARRAY=("M" "H" "D" "W")
CYCLE_TIME_ARRAY=(30 60 1440 10080)
CURRENT_CYCLE=${CYCLE_LETT_ARRAY[CYCLE]}
echo "Current Cycle: ${CYCLE_WORD_ARRAY[CYCLE]}" >> $LOGFILE
EC2_AZ=`curl http://169.254.169.254/latest/meta-data/placement/availability-zone`
REGIONID="`echo \"$EC2_AZ\" | sed 's/[a-z]$//'`"
VOLUMES=`aws ec2 describe-volumes \
--filter "Name=tag-key,Values='SnapRate'" \
         "Name=tag-key,Values='Name'" \
         "Name=tag-key,Values='Application'" \
         "Name=tag:Mode,Values='Auto'" \
         "Name=tag:Keep,Values='Yes'" \
--query "Volumes[*].{VolumeID:VolumeId, \
                         Name:Tags[?Key==\\\`Name\\\`].Value, \
                     SnapRate:Tags[?Key==\\\`SnapRate\\\`].Value}" \
--region $REGIONID`
echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=" >> $LOGFILE
for VOL in $(echo "${VOLUMES}" | jq -c '.[]'); do
    VOLID=`echo ${VOL} | jq -r '.VolumeID'`
    echo "Volume ID: $VOLID" >> $LOGFILE
    SNAPRATE=`echo ${VOL} | jq '.SnapRate' | jq -r .[0]`
    echo "SnapRate: $SNAPRATE" >> $LOGFILE
    IFS='/' read -r -a SNAPARRAY <<< "$SNAPRATE"
    CYCLE_VALUE=${SNAPARRAY[CYCLE]}
    if [ $CYCLE -lt ${#CYCLE_TIME_ARRAY[@]} ]
    then
      timeaway=$((${CYCLE_TIME_ARRAY[$CYCLE]} * $CYCLE_VALUE * -1))
    else
      timeaway=-10080
    fi
    #This time is in Zulu because the start_time on Snapshot is also Zulu
    THRESHOLD=`date -d "+$timeaway minutes"`
    SNAPS=`aws ec2 describe-snapshots --owner-ids self \
        --filters "Name=tag:Mode,Values='Auto'" \
                  "Name=volume-id,Values=$VOLID" \
                  "Name=tag:Cycle,Values=$CURRENT_CYCLE" \
                  "Name=status,Values=completed" \
        --query "Snapshots[*].{SnapshotId:SnapshotId, \
                              Description:Description, \
                                StartTime:StartTime, \
                                    Cycle:Tags[?Key==\\\`Cycle\\\`].Value}" \
        --region $REGIONID`
    for SNAP in $(echo "${SNAPS}" | jq -c '.[]'); do
        SNAPID=`echo ${SNAP} | jq -r '.SnapshotId'`
        START_TIME_STRING=`echo ${SNAP} | jq -r '.StartTime'`
        START_TIME=`date -d $START_TIME_STRING`
        if [[ $(date -d "$START_TIME" +%s) -lt $(date -d "$THRESHOLD" +%s) ]]
        then
            echo "Delete SnapshotID: $SNAPID" >> $LOGFILE
            OUTPUT=`aws ec2 delete-snapshot --snapshot-id $SNAPID --region $REGIONID`
            echo $OUTPUT >> $LOGFILE
        fi
    done
    echo "--------------------------" >> $LOGFILE
done
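
For reference, a sketch of the crontab entries for both scripts; the install path is an assumption, and a logs/ sub-directory is expected next to the scripts. Since get_cycle.bsh keys off the current minute, run them at the top and bottom of each hour:

0,30 * * * * /opt/snapshots/aws_create_snapshots.bsh
0,30 * * * * /opt/snapshots/aws_cleanup_snapshots.bsh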

Tuesday, September 4, 2018

Launching Application Designer in PT8.55

Launching App Designer in PT8.55


On a brand new PT8.55 server, if you launch Application Designer (E:\pt85515\pt\ps_home8.55.15\bin\client\winx86\pside.exe) and try to sign on to your database, you will most likely receive this error.

There's probably nothing wrong with your password. Make sure that tnsping to this DB returns successfully and that you can still log in via PIA using this user ID and password. If that's the case, continue...

Open Configuration Manager 8.55 (on our server, this is located at E:\pt85515\pt\ps_home8.55.15\bin\client\winx86\pscfg.exe)

Provide the Database Name, Connect ID, and Connect Password. Try again.

If you get this error, that means your Connect ID or Password is invalid.

Monday, August 27, 2018

Installing ECE - Offline

Installing Elastic Cloud Enterprise - Offline

How to install Elastic Cloud Enterprise on your own AWS EC2 Instance running RHEL7 using your private Docker registry.

References

  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-installing-offline.html
  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-prereqs.html
  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-uninstall.html
  • https://discuss.elastic.co/t/uid-gid-error-on-install/142633
  • http://embshd.blogspot.com/2018/08/installing-private-docker-registry.html
  • https://success.docker.com/article/using-systemd-to-control-the-docker-daemon
  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-retrieve-passwords.html

Setup

  1. Create groups (ECE cannot be installed with UID or GID less than 1000)
    sudo groupadd -g 1010 elastic
    sudo groupadd -g 1011 docker
    
  2. Create user, elastic and add it to groups wheel and docker
    sudo useradd -g elastic -M -N -u 1010 elastic
    sudo usermod -aG wheel elastic
    sudo usermod -aG docker elastic
    sudo usermod -L elastic
    
  3. Check result of user elastic
    sudo su elastic
    id
    
  4. Expected result
    
    uid=1010(elastic) 
    gid=1010(elastic)
    groups=1010(elastic),10(wheel),1011(docker)
    context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
    
  5. Install any patches and install docker
    sudo yum update
    sudo yum install docker
    
  6. Set SELinux to permissive
  7. Add Cert Authority public certs (if we're using Self-Signed cert then just copy the Public cert and rename it as ca.crt)
    cd /etc/docker/certs.d
    sudo mkdir 10.10.10.10:443
    sudo chmod 755 10.10.10.10:443
    cd 10.10.10.10:443
    sudo touch ca.crt
    sudo chmod 666 ca.crt
    
  8. Make /mnt/data and /mnt/data/docker available for elastic user
    sudo install -o elastic -g elastic -d -m 700 /mnt/data
    sudo install -o elastic -g elastic -d -m 700 /mnt/data/docker
    
  9. Enable docker debugging by editing /etc/docker/daemon.json
    {
      "debug": true
    }
    
  10. Configure Docker Daemon options here (/etc/systemd/system/docker.service.d/docker.conf), create this directory and file
    mkdir /etc/systemd/system/docker.service.d
    touch /etc/systemd/system/docker.service.d/docker.conf
    
  11. Add following lines to the above file (172.17.42.1/16 is private bridge for Docker)
    [Unit]
    Description=Docker Service
    After=multi-user.target
    
    [Service]
    ExecStart=
    ExecStart=/usr/bin/docker daemon -g /mnt/data/docker --storage-driver=overlay --bip=172.17.42.1/16
    
  12. Add to path
    export PATH=$PATH:/usr/bin/docker:/mnt/data/docker
    
  13. Add link
    ln -s /usr/libexec/docker/docker-proxy-current /usr/bin/docker-proxy
    
  14. Edit /etc/sysctl.conf; 32 GB is the minimum memory requirement (a consolidated sysctl.conf sketch follows this list)
    1. vm.max_map_count should be 1 per 128KB of system memory
      1. 262144 = 32 GB
      2. 524288 = 64GB
      3. 1048576 = 128 GB
      4. 2097152 = 256GB
    2. Once updated, reload it
      sysctl -p
      
  15. Verify that fs.may_detach_mounts = 1 in /etc/sysctl.conf
    cat /proc/sys/fs/may_detach_mounts
    
  16. Verify that net.ipv4.ip_forward = 1 in /etc/sysctl.conf
    cat /proc/sys/net/ipv4/ip_forward
    
  17. Edit /etc/security/limits.conf
    *                soft    nofile         1024000
    *                hard    nofile         1024000
    *                soft    memlock        unlimited
    *                hard    memlock        unlimited
    elastic          soft    nofile         1024000
    elastic          hard    nofile         1024000
    elastic          soft    memlock        unlimited
    elastic          hard    memlock        unlimited
    root             soft    nofile         1024000
    root             hard    nofile         1024000
    root             soft    memlock        unlimited
    
  18. Register and start docker service
    sudo systemctl daemon-reload
    sudo systemctl enable docker.service
    sudo systemctl start docker.service
    
  19. Obtain the ECE install script from Elastic and update file permission
    sudo chmod 777 elastic-cloud-enterprise.sh
    
  20. Run it with --docker-registry flag
    sudo su elastic
    bash elastic-cloud-enterprise.sh install --docker-registry 10.105.142.17:443 --debug
    
  21. Expected Result
  22. I did not get the expected output of the admin password because the bootstrap timed out (see /mnt/data/elastic/logs/). Instead, I had to manually pull this information out of the JSON file; see steps 24-25.
    [2018-08-23 18:37:41,204][INFO ][no.found.bootstrap.BootstrapInitial] Creating Admin Console Elasticsearch backend {}
    [2018-08-23 18:37:41,451][INFO ][no.found.bootstrap.ServiceLayerBootstrap] Waiting for [ensuring-plan] to complete. Retrying every [1 second] (cause: [org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /clusters/42953c05d2f243a5baa8c3047c710f95/plans/status]) {}
    [2018-08-23 18:37:48,637][INFO ][no.found.bootstrap.ServiceLayerBootstrap] Waiting for [ensuring-plan] to complete. Retrying every [1 second] (cause: [java.lang.Exception: not yet started]) {}
    [2018-08-23 19:07:41,323][ERROR][no.found.bootstrap.BootstrapInitial$] Unhandled error. {}
    java.util.concurrent.TimeoutException: Futures timed out after [30 minutes]
            at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
            at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
            at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
            at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
            at scala.concurrent.Await$.result(package.scala:190)
            at no.found.bootstrap.BootstrapInitial.bootstrapServiceLayer(BootstrapInitial.scala:880)
            at no.found.bootstrap.BootstrapInitial.bootstrap(BootstrapInitial.scala:650)
            at no.found.bootstrap.BootstrapInitial$.delayedEndpoint$no$found$bootstrap$BootstrapInitial$1(BootstrapInitial.scala:1215)
            at no.found.bootstrap.BootstrapInitial$delayedInit$body.apply(BootstrapInitial.scala:1209)
            at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
            at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
            at scala.App$$anonfun$main$1.apply(App.scala:76)
            at scala.App$$anonfun$main$1.apply(App.scala:76)
            at scala.collection.immutable.List.foreach(List.scala:392)
            at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
            at scala.App$class.main(App.scala:76)
            at no.found.bootstrap.BootstrapInitial$.main(BootstrapInitial.scala:1209)
            at no.found.bootstrap.BootstrapInitial.main(BootstrapInitial.scala)
    
  23. At the end, I was left with 14 containers, where 13 are kept running and 1 has exited.
    frc-cloud-uis-cloud-ui                  Up 0.0.0.0:12400->5601/tcp, 0.0.0.0:12443->5643/tcp
    frc-admin-consoles-admin-console  Up 0.0.0.0:12300->12300/tcp, 0.0.0.0:12343->12343/tcp
    frc-curators-curator                  Up  
    frc-constructors-constructor                 Up  
    frc-services-forwarders-services-forwarder Up 0.0.0.0:9244->9244/tcp, 0.0.0.0:12344->12344/tcp
    frc-beats-runners-beats-runner   Up  
    frc-allocators-allocator   Up  
    frc-directors-director    Up 0.0.0.0:2112->2112/tcp 
    frc-proxies-proxy    Up 0.0.0.0:9200->9200/tcp, 0.0.0.0:9243->9243/tcp, 0.0.0.0:9300->9300/tcp, 0.0.0.0:9343->9343/tcp
    frc-blueprints-blueprint   Up  
    frc-runners-runner    Up  
    frc-client-forwarders-client-forwarder  Up  
    frc-zookeeper-servers-zookeeper   Up 0.0.0.0:2191->2191/tcp, 0.0.0.0:12191->12191/tcp, 0.0.0.0:12898->12898/tcp, 0.0.0.0:13898->13898/tcp
    elastic-cloud-enterprise-bootstrap-1.1.4 Exit  
    
  24. Install jq
    yum install jq
    
  25. Retrieve password
    jq -r '.adminconsole_root_password' /mnt/data/elastic/bootstrap-state/bootstrap-secrets.json
  26. Go to http://127.0.0.1:12400 and Log in as "root"

  27. If something goes wrong, you can retry after removing containers and images
    docker stop $(docker ps -a -q)
    docker rm -f frc-runners-runner frc-allocators-allocator $(docker ps -a -q)
    docker rmi $(docker images -a -q)
    sudo rm -rf /mnt/data/elastic/* 
    
  28. You can find install logs here:
    /mnt/data/elastic/logs/
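
Putting steps 14 through 16 together, the /etc/sysctl.conf additions end up looking like the sketch below (assuming a 32 GB host; scale vm.max_map_count per the table in step 14), followed by sysctl -p to apply:

vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1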

Tuesday, August 14, 2018

Installing Private Docker Registry

Installing private docker registry for off-line use

This is in preparation for installing off-line Elastic Cloud Enterprise. 

Preparation

This setup requires 3 servers
  1. Server A: internet connected where we'll gather our source docker images
  2. Server B: Off-line, where we'll host our Docker private registry
  3. Server C: Off-line, where we'll pull from our Server B's registry
We assume that you have a local repo available from which to install Docker. 

Setup

On all three servers
  1. (Optional) If you don't have RHEL subscription, you'll need to add CentOS-extras
    1. Create this file: /etc/yum.repos.d/centos.repo
    2. Add this content to it:
      [CentOS-extras]
      name=CentOS-7-Extras
      mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=extras&infra=$infra
      #baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
      gpgcheck=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
      
  2. Set SELinux to permissive (otherwise you will have to explicitly permit Docker to register its port)
    1. Go to this file: /etc/selinux/config
    2. Update this line:
      SELINUX=permissive
      
  3. (Optional) Disable IPTABLES - you can also just open up ports for Docker use
    chkconfig iptables off
    service iptables stop
    
  4. Install Docker from Repo
    yum install docker
    
  5. Enable Docker service
    sudo systemctl enable docker.service
    
  6. Start Docker Services
    sudo systemctl start docker.service
    
  7. To check status
    sudo systemctl status docker.service
    
On Server A (with internet connection)

  1. Pull down necessary images
    docker pull registry-1.docker.io/distribution/registry:2.0
    docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker pull docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0
    docker pull docker.elastic.co/cloud-assets/kibana:6.3.0-0
    
  2. Save all the images to current directory
    docker save -o registry2.docker registry-1.docker.io/distribution/registry:2.0
    docker save -o ece_1.1.4.docker docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker save -o es_6.3.docker docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0
    docker save -o kibana_6.3.docker docker.elastic.co/cloud-assets/kibana:6.3.0-0
    
  3. If you've made an error, you can delete images with this command; provide an individual image ID or clear them all
    docker rmi $(docker images -a -q)
    
  4. You can list all images via this command
    docker images
    
  5. Transfer these .docker files to Server B

On Server B (without internet connection)

  1. Load all the .docker files
    docker load -i registry2.docker
    docker load -i ece_1.1.4.docker
    docker load -i es_6.3.docker
    docker load -i kibana_6.3.docker
  2. Create Self-Signed Cert
    1. Prepare Cert Configure file
    2. Create a new file: /etc/ssl/mycert.conf
    3. Paste this content and update according to your situation
      [req]
      distinguished_name = req_distinguished_name
      x509_extensions = v3_req
      prompt = no
      [req_distinguished_name]
      C = US
      ST = VA
      L = SomeCity
      O = MyCompany
      OU = MyDivision
      CN = www.company.com
      [v3_req]
      keyUsage = keyEncipherment, dataEncipherment
      extendedKeyUsage = serverAuth
      subjectAltName = @alt_names
      [alt_names]
      DNS.1 = www.company.net
      DNS.2 = company.net
      IP.1 = 10.10.10.10
      
  3. Go to /etc/ssl and run this command
    openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout mycert.private -out mycert.cert -config mycert.conf -extensions 'v3_req'
    
  4. Move these 2 new files (private and cert) into cert sub-folder (/etc/ssl/certs)
  5. Start the registry
    sudo docker run -d \
      --restart=always \
      --name registry \
      -v /etc/ssl/certs:/certs \
      -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/mycert.cert \
      -e REGISTRY_HTTP_TLS_KEY=/certs/mycert.private \
      -p 443:443 \
      registry:2
    
    1. The --name flag sets the name of this new registry container
    2. The -v flag maps the host's /etc/ssl/certs into the container as /certs
  6. Few helpful commands
    1. Status of Registry
      sudo docker ps -a
      
    2. Stop Registry
      sudo docker container stop registry
      
    3. Delete Registry
      sudo docker rm CONTAINER_ID
      
  7. Tag all the available images
    docker tag docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:1.1.4 10.10.10.10:443/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker tag docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0 10.10.10.10:443/cloud-assets/elasticsearch:6.3.0-0
    docker tag docker.elastic.co/cloud-assets/kibana:6.3.0-0 10.10.10.10:443/cloud-assets/kibana:6.3.0-0
    
  8. Push the tagged images
    docker push 10.10.10.10:443/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker push 10.10.10.10:443/cloud-assets/elasticsearch:6.3.0-0
    docker push 10.10.10.10:443/cloud-assets/kibana:6.3.0-0
    
On Server C (offline, not hosting the private registry)

  1. Create a new folder under /etc/docker/certs.d/ use the same name as the host:port of Server B
    mkdir /etc/docker/certs.d/10.10.10.10:443
    
  2. Copy mycert.cert from Server B (step 4 above) to directory and call it ca.crt
  3. Pull from Private Registry
    docker pull 10.10.10.10:443/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    
  4. Result
    605ce1bd3f31: Pull complete
    8319863bba65: Pull complete
  5. API Calls: you can also interact with the private registry over its HTTP API (a curl example follows this list)
    1. Look up all available images
      https://10.10.10.10/v2/_catalog
      
      Output
      
      {
      "repositories":[
       "cloud-assets/elasticsearch",
       "cloud-assets/kibana",
       "cloud-enterprise/elastic-cloud-enterprise"]
      }
      
    2. Get details on an image
      https://10.10.10.10/v2/cloud-assets/kibana/tags/list
      
      Output
      
      {
      "name":"cloud-assets/kibana",
      "tags":["6.3.0-0"]
      }
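
The same registry API calls from step 5 can be made with curl from Server C, pointing at the CA cert copied in step 2 (the IP and port are the placeholders used throughout):

curl --cacert /etc/docker/certs.d/10.10.10.10:443/ca.crt https://10.10.10.10/v2/_catalog
curl --cacert /etc/docker/certs.d/10.10.10.10:443/ca.crt https://10.10.10.10/v2/cloud-assets/kibana/tags/list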

Wednesday, August 1, 2018

Installing Zeppelin

Installing Stand alone Zeppelin for EMR

This is a supplement to the AWS blog post on the same subject. In our scenario, we have a standalone server running Zeppelin and an EMR cluster, all running in a VPC without internet access. These are the steps we took to make this happen. The necessary files were obtained on an internet-connected machine and brought in via an S3 bucket.

  1. Launch a new EC2 Linux (RedHat) instance, we'll call this Zeppelin instance.
  2. Attach a security group that can talk to itself, call it zeppelin.
  3. Install the AWS CLI. Use these instructions. Here's a summary:
    curl -O https://bootstrap.pypa.io/get-pip.py
    python get-pip.py --user
    export PATH=~/.local/bin:$PATH
    source ~/.bash_profile
    pip install awscli --upgrade --user

  4. Install JDK 8 Developer:

    yum install java-1.8.0-openjdk-devel.x86_64
  5. This will appear at /etc/alternatives/java_sdk_openjdk
  6. Create a new directory: /home/ec2-user/zeppelin-notebook
  7. Download Zeppelin from Apache: http://apache.mirrors.tds.net/zeppelin/zeppelin-0.8.0/zeppelin-0.8.0-bin-all.tgz
  8. Extract Content
    tar -zxvf zeppelin-0.8.0-bin-all.tgz
  9. To make it simpler later on we're going to move this new directory to /home/ec2-user/zeppelin
    mv zeppelin-0.8.0-bin-all /home/ec2-user/zeppelin
  10. Download Spark from Apache
    http://www.gtlib.gatech.edu/pub/apache/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
  11. Extract Content and move it to this directory
    tar -zxvf spark-2.3.1-bin-hadoop2.7.tgz
    mv spark-2.3.1-bin-hadoop2.7 /home/ec2-user/prereqs/spark
  12. Go to EMR and launch a new cluster
    1. Go to Advanced Option
    2. Select Hadoop, Hive, and Spark
    3. Add 2 Custom JAR type Steps
      Name: Hadoopconf
      JAR location: command-runner.jar
      Arguments: aws s3 cp /etc/hadoop/conf/ s3://<YOUR_S3_BUCKET>/hadoopconf --recursive
      Action on Failure: Continue
      Name: hive-site
      JAR location: command-runner.jar
      Arguments: aws s3 cp /etc/hive/conf/hive-site.xml s3://<YOUR_S3_BUCKET>/hiveconf/hive-site.xml
      Action on failure: Continue
      
  13. Leave rest as default. I prefer to put them in the same subnet as my Zeppelin Instance.
  14. Be sure to attach the security group, zeppelin, that we created above as an Additional Security Group.
  15. Launch cluster
  16. Go to Steps and wait for them to complete
  17. Go back to Zeppelin Instance
  18. Download hadoopconf from the S3 location used in step 12
    aws s3 sync s3://<YOUR_S3_BUCKET>/hadoopconf /home/ec2-user/hadoopconf
    
  19. Download hive-site from S3 to zeppelin conf directory
    aws s3 cp s3://<YOUR_S3_BUCKET>/hiveconf/hive-site.xml /home/ec2-user/zeppelin/conf/hive-site.xml
    
  20. Make a copy of /home/ec2-user/zeppelin/conf/zeppelin-env.sh.template as zeppelin-env.sh
  21. Add following to the top of the file:
    export JAVA_HOME=/etc/alternatives/java_sdk_openjdk
    export MASTER=yarn-client
    export HADOOP_CONF_DIR=/home/ec2-user/hadoopconf
    export ZEPPELIN_NOTEBOOK_DIR=/home/ec2-user/zeppelin-notebook
    export SPARK_HOME=/home/ec2-user/prereqs/spark
    
  22. Make a copy of /home/ec2-user/zeppelin/conf/zeppelin-site.xml.template as zeppelin-site.xml
  23. Edit the following entry
    <name>zeppelin.notebook.dir</name>
    <value>/home/ec2-user/zeppelin-notebook</value>
    
  24. Make a copy of /home/ec2-user/prereqs/spark/conf/spark-env.sh.template as spark-env.sh
  25. Add the following to the top:
    export HADOOP_CONF_DIR=/home/ec2-user/hadoopconf
    export JAVA_HOME=/etc/alternatives/java_sdk_openjdk
    
  26. Start it
    sudo bin/zeppelin-daemon.sh start
    
  27. Tail the log file you find here (/home/ec2-user/zeppelin/logs) and wait for Zeppelin to start
  28. To check the status of the daemon (you can also use stop or restart in place of status; a curl check is sketched after this list)
    sudo bin/zeppelin-daemon.sh status
    
  29. While it's doing that, let's download jersey-client (a Spark dependency). A newer version would probably work, but I'm going with the one in the documentation
    wget http://central.maven.org/maven2/com/sun/jersey/jersey-client/1.13/jersey-client-1.13.jar
    
  30. Put this here /dependencies/jersey-client1.13.jar
  31. Log into Zeppelin (it's on port 8080)
  32. Go to Interpreter
  33. Go to Spark and click Edit
  34. Add the jersey-client we downloaded as a Dependency use the full path of /dependencies/jersey-client1.13.jar
  35. Click Save
  36. Now we're going to download the sample note here
  37. Also download this csv file and move it to your own S3 bucket location. 
  38. Go back home
  39. Click import Note
  40. Select the sample note we downloaded on step 36
  41. When the note comes up, update the csv file location to your own S3 bucket location where you moved this file in step 37
  42. Run it
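
As a quick sanity check from the shell, before logging into the UI in step 31, you can hit Zeppelin's REST API (assuming the default port of 8080); it should return the Zeppelin version:

curl -s http://localhost:8080/api/version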

Tuesday, May 1, 2018

Cross account access to AWS CodeCommit

How to access Account A's CodeCommit repository from Account B


Prerequisite:
  • You need to have been granted a Role in Account A as Cross Account access with the necessary permission to access all or specific CodeCommit repository
  • The instruction is meant for access from Windows, adjust accordingly if you are doing this from Linux or Mac

Create a User in your own account with Programmatic Access and (at least) sts:AssumeRole

Download Access Key


  1. Go to IAM >> Users
  2. Go to Security Credentials
  3. Create Access Key

Install AWS CLI and install with default settings

https://aws.amazon.com/cli/

Install git and install with default settings

https://git-scm.com/downloads

You can use this for reference if you are not familiar with git

http://rogerdudler.github.io/git-guide/

Configure AWS CLI

  1. Open Command Prompt (or powershell)
  2. Run aws configure and enter the access key and secret key you downloaded above, plus your default region
  3. Go to your personal directory (c:\users\name\.aws) - you will need to show hidden files
  4. Open the credentials file in a text editor and replace the content as follows (a hedged sketch is shown after this list), where:
  5. AAAAAAAAAA - Account ID of Account A
  6. 99999999999999 - your given Role Name in Account A
  7. xxxxxxxxxxxxx - keys generated above
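
Since the screenshots aren't reproduced here, the sketch below shows what a typical cross-account layout looks like using the placeholders above; the profile name and region are assumptions, and your actual files may differ.

# c:\users\name\.aws\credentials  (keys generated above)
[default]
aws_access_key_id = xxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxx

# c:\users\name\.aws\config  (role granted to you in Account A)
[profile codecommit-cross]
region = us-east-1
role_arn = arn:aws:iam::AAAAAAAAAA:role/99999999999999
source_profile = default
output = json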

Configure Git

  1. Go to your personal directory (c:\users\name) and open .gitconfig in a text editor (a hedged example follows this list)
  2. Make sure your region matches
  3. Open git command prompt and test your connection
  4. Press Cancel when you receive username/password prompt from Git Credential Manager (if installed)
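
A hedged sketch of the .gitconfig entry that routes CodeCommit HTTPS calls through the cross-account profile above; the region and profile name are assumptions, so make sure the region matches your repository:

[credential "https://git-codecommit.us-east-1.amazonaws.com"]
    helper = !aws --profile codecommit-cross codecommit credential-helper $@
    UseHttpPath = true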





