Monday, August 27, 2018

Installing ECE - Offline

Installing Elastic Cloud Enterprise - Offline

How to install Elastic Cloud Enterprise on your own AWS EC2 Instance running RHEL7 using your private Docker registry.

References

  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-installing-offline.html
  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-prereqs.html
  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-uninstall.html
  • https://discuss.elastic.co/t/uid-gid-error-on-install/142633
  • http://embshd.blogspot.com/2018/08/installing-private-docker-registry.html
  • https://success.docker.com/article/using-systemd-to-control-the-docker-daemon
  • https://www.elastic.co/guide/en/cloud-enterprise/current/ece-retrieve-passwords.html

Setup

  1. Create groups (ECE cannot be installed with UID or GID less than 1000)
    sudo groupadd -g 1010 elastic
    sudo groupadd -g 1011 docker
    
  2. Create user elastic and add it to the wheel and docker groups
    sudo useradd -g elastic -M -N -u 1010 elastic
    sudo usermod -aG wheel elastic
    sudo usermod -aG docker elastic
    sudo usermod -L elastic
    
  3. Check result of user elastic
    sudo su elastic
    id
    
  4. Expected result
    
    uid=1010(elastic) 
    gid=1010(elastic)
    groups=1010(elastic),10(wheel),1011(docker)
    context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
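The install script enforces the UID/GID floor mentioned in step 1, so it is worth checking before running it. A minimal pre-flight sketch; `check_ids` is a hypothetical helper of mine, and the hard-coded values mirror the groupadd/useradd commands above:

```shell
# Pre-flight sketch: ECE refuses to install for users whose UID or GID
# is below 1000. check_ids is a hypothetical helper, not part of ECE.
check_ids() {
  local uid=$1 gid=$2
  if [ "$uid" -ge 1000 ] && [ "$gid" -ge 1000 ]; then
    echo "ok: uid=$uid gid=$gid"
  else
    echo "fail: UID and GID must be >= 1000 (got uid=$uid gid=$gid)"
  fi
}
# On the real host, run as the elastic user: check_ids "$(id -u)" "$(id -g)"
check_ids 1010 1010
```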
    
  5. Install any patches and install docker
    sudo yum update
    sudo yum install docker
    
  6. Set SELinux to permissive
  7. Add the Cert Authority's public cert (if you're using a self-signed cert, just copy the public cert and rename it ca.crt)
    cd /etc/docker/certs.d
    sudo mkdir 10.10.10.10:443
    sudo chmod 755 10.10.10.10:443
    cd 10.10.10.10:443
    sudo touch ca.crt   # then copy the CA public cert contents into this file
    sudo chmod 666 ca.crt
    
  8. Make /mnt/data and /mnt/data/docker available for elastic user
    sudo install -o elastic -g elastic -d -m 700 /mnt/data
    sudo install -o elastic -g elastic -d -m 700 /mnt/data/docker
    
  9. Enable docker debugging by editing /etc/docker/daemon.json
    {
      "debug": true
    }
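A malformed daemon.json stops dockerd from starting at all, so it is worth linting the file before restarting the service. A sketch using a scratch copy; python3 stands in for a JSON linter here and is my choice, not part of the Docker tooling:

```shell
# Write the debug config to a scratch file and validate that it is
# well-formed JSON before copying it to /etc/docker/daemon.json.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "debug": true
}
EOF
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json OK"
```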
    
  10. Configure Docker Daemon options here (/etc/systemd/system/docker.service.d/docker.conf), create this directory and file
    sudo mkdir /etc/systemd/system/docker.service.d
    sudo touch /etc/systemd/system/docker.service.d/docker.conf
    
  11. Add the following lines to the above file (the empty ExecStart= clears the packaged start command before overriding it; 172.17.42.1/16 is the private bridge for Docker)
    [Unit]
    Description=Docker Service
    After=multi-user.target
    
    [Service]
    ExecStart=
    ExecStart=/usr/bin/docker daemon -g /mnt/data/docker --storage-driver=overlay --bip=172.17.42.1/16
    
  12. Add to PATH (PATH entries are directories, so use /usr/bin rather than the docker binary itself)
    export PATH=$PATH:/usr/bin:/mnt/data/docker
    
  13. Add link
    sudo ln -s /usr/libexec/docker/docker-proxy-current /usr/bin/docker-proxy
    
  14. Edit /etc/sysctl.conf (32 GB is the minimum requirement)
    1. vm.max_map_count should be 1 per 128 KB of system memory
      1. 262144 = 32 GB
      2. 524288 = 64 GB
      3. 1048576 = 128 GB
      4. 2097152 = 256 GB
    2. Once updated, reload it
      sysctl -p
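The sizing table above works out to one mapping per 128 KB of RAM, which you can compute directly. A sketch; `ram_gb` is a placeholder for your machine's memory:

```shell
# vm.max_map_count = RAM_in_KB / 128
ram_gb=32
echo $(( ram_gb * 1024 * 1024 / 128 ))   # 32 GB -> 262144, matching the table
```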
      
  15. Verify that fs.may_detach_mounts = 1 in /etc/sysctl.conf
    cat /proc/sys/fs/may_detach_mounts
    
  16. Verify that net.ipv4.ip_forward = 1 in /etc/sysctl.conf
    cat /proc/sys/net/ipv4/ip_forward
    
  17. Edit /etc/security/limits.conf
    *                soft    nofile         1024000
    *                hard    nofile         1024000
    *                soft    memlock        unlimited
    *                hard    memlock        unlimited
    elastic          soft    nofile         1024000
    elastic          hard    nofile         1024000
    elastic          soft    memlock        unlimited
    elastic          hard    memlock        unlimited
    root             soft    nofile         1024000
    root             hard    nofile         1024000
    root             soft    memlock        unlimited
    root             hard    memlock        unlimited
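After editing limits.conf, log out and back in, then confirm the new limits are in effect for your shell (a quick generic check, not ECE-specific):

```shell
ulimit -n   # max open files; should report 1024000 after re-login
ulimit -l   # max locked memory; should report unlimited after re-login
```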
    
  18. Register and start docker service
    sudo systemctl daemon-reload
    sudo systemctl enable docker.service
    sudo systemctl start docker.service
    
  19. Obtain the ECE install script from Elastic and update file permission
    sudo chmod 777 elastic-cloud-enterprise.sh
    
  20. Run it with --docker-registry flag
    sudo su elastic
    bash elastic-cloud-enterprise.sh install --docker-registry 10.10.10.10:443 --debug
    
  21. Expected result: the admin console password printed at the end of the install.
  22. I did not get the expected output of the admin password since the install timed out (see /mnt/data/elastic/logs/). Instead I had to pull this information out of the JSON file manually. See step 25.
    [2018-08-23 18:37:41,204][INFO ][no.found.bootstrap.BootstrapInitial] Creating Admin Console Elasticsearch backend {}
    [2018-08-23 18:37:41,451][INFO ][no.found.bootstrap.ServiceLayerBootstrap] Waiting for [ensuring-plan] to complete. Retrying every [1 second] (cause: [org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /clusters/42953c05d2f243a5baa8c3047c710f95/plans/status]) {}
    [2018-08-23 18:37:48,637][INFO ][no.found.bootstrap.ServiceLayerBootstrap] Waiting for [ensuring-plan] to complete. Retrying every [1 second] (cause: [java.lang.Exception: not yet started]) {}
    [2018-08-23 19:07:41,323][ERROR][no.found.bootstrap.BootstrapInitial$] Unhandled error. {}
    java.util.concurrent.TimeoutException: Futures timed out after [30 minutes]
            at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
            at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
            at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
            at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
            at scala.concurrent.Await$.result(package.scala:190)
            at no.found.bootstrap.BootstrapInitial.bootstrapServiceLayer(BootstrapInitial.scala:880)
            at no.found.bootstrap.BootstrapInitial.bootstrap(BootstrapInitial.scala:650)
            at no.found.bootstrap.BootstrapInitial$.delayedEndpoint$no$found$bootstrap$BootstrapInitial$1(BootstrapInitial.scala:1215)
            at no.found.bootstrap.BootstrapInitial$delayedInit$body.apply(BootstrapInitial.scala:1209)
            at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
            at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
            at scala.App$$anonfun$main$1.apply(App.scala:76)
            at scala.App$$anonfun$main$1.apply(App.scala:76)
            at scala.collection.immutable.List.foreach(List.scala:392)
            at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
            at scala.App$class.main(App.scala:76)
            at no.found.bootstrap.BootstrapInitial$.main(BootstrapInitial.scala:1209)
            at no.found.bootstrap.BootstrapInitial.main(BootstrapInitial.scala)
    
  23. At the end, I was left with 14 containers: 13 running and 1 exited.
    frc-cloud-uis-cloud-ui                      Up      0.0.0.0:12400->5601/tcp, 0.0.0.0:12443->5643/tcp
    frc-admin-consoles-admin-console            Up      0.0.0.0:12300->12300/tcp, 0.0.0.0:12343->12343/tcp
    frc-curators-curator                        Up
    frc-constructors-constructor                Up
    frc-services-forwarders-services-forwarder  Up      0.0.0.0:9244->9244/tcp, 0.0.0.0:12344->12344/tcp
    frc-beats-runners-beats-runner              Up
    frc-allocators-allocator                    Up
    frc-directors-director                      Up      0.0.0.0:2112->2112/tcp
    frc-proxies-proxy                           Up      0.0.0.0:9200->9200/tcp, 0.0.0.0:9243->9243/tcp, 0.0.0.0:9300->9300/tcp, 0.0.0.0:9343->9343/tcp
    frc-blueprints-blueprint                    Up
    frc-runners-runner                          Up
    frc-client-forwarders-client-forwarder      Up
    frc-zookeeper-servers-zookeeper             Up      0.0.0.0:2191->2191/tcp, 0.0.0.0:12191->12191/tcp, 0.0.0.0:12898->12898/tcp, 0.0.0.0:13898->13898/tcp
    elastic-cloud-enterprise-bootstrap-1.1.4    Exited
    
  24. Install jq
    yum install jq
    
  25. Retrieve password
    jq -r '.adminconsole_root_password' /mnt/data/elastic/bootstrap-state/bootstrap-secrets.json
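If jq is not available on the offline host, any JSON-aware tool works. A sketch against a mocked-up secrets file; the field name comes from the real bootstrap-secrets.json, but the password value here is invented:

```shell
# Mock of /mnt/data/elastic/bootstrap-state/bootstrap-secrets.json
secrets=$(mktemp)
cat > "$secrets" <<'EOF'
{"adminconsole_root_password": "example-password"}
EOF
# python3 as a jq stand-in
python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["adminconsole_root_password"])' "$secrets"
```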
  26. Go to http://127.0.0.1:12400 and Log in as "root"

  27. If something goes wrong, you can retry after removing containers and images
    docker stop $(docker ps -a -q)
    docker rm -f frc-runners-runner frc-allocators-allocator $(docker ps -a -q)
    docker rmi $(docker images -a -q)
    sudo rm -rf /mnt/data/elastic/* 
    
  28. You can find install logs here:
    /mnt/data/elastic/logs/

Tuesday, August 14, 2018

Installing Private Docker Registry

Installing private docker registry for off-line use

This is in preparation for installing off-line Elastic Cloud Enterprise. 

Preparation

This setup requires 3 servers
  1. Server A: internet connected where we'll gather our source docker images
  2. Server B: Off-line, where we'll host our Docker private registry
  3. Server C: Off-line, where we'll pull from our Server B's registry
We assume you have a local repo available from which to install the Docker software.

Setup

On all three servers
  1. (Optional) If you don't have RHEL subscription, you'll need to add CentOS-extras
    1. Create this file: /etc/yum.repos.d/centos.repo
    2. Add this content to it:
      [CentOS-extras]
      name=CentOS-7-Extras
      mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=extras&infra=$infra
      #baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
      gpgcheck=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
      
  2. Set SELinux to permissive (if you don't, you have to explicitly permit Docker to register its ports)
    1. Edit this file: /etc/selinux/config
    2. Update this line:
      SELINUX=permissive
      
  3. (Optional) Disable IPTABLES - you can also just open up ports for Docker use
    chkconfig iptables off
    service iptables stop
    
  4. Install Docker from Repo
    yum install docker
    
  5. Enable Docker service
    sudo systemctl enable docker.service
    
  6. Start Docker Services
    sudo systemctl start docker.service
    
  7. To check status
    sudo systemctl status docker.service
    
On Server A (with internet connection)

  1. Pull down necessary images
    docker pull registry-1.docker.io/distribution/registry:2.0
    docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker pull docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0
    docker pull docker.elastic.co/cloud-assets/kibana:6.3.0-0
    
  2. Save all the images to current directory
    docker save -o registry2.docker registry-1.docker.io/distribution/registry:2.0
    docker save -o ece_1.1.4.docker docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker save -o es_6.3.docker docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0
    docker save -o kibana_6.3.docker docker.elastic.co/cloud-assets/kibana:6.3.0-0
    
  3. If you've made any error, you can delete images with this command; provide an individual image ID or clear all:
    docker rmi $(docker images -a -q)
    
  4. You can list all images via this command
    docker images
    
  5. Transfer these .docker files to Server B
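Since the transfer to Server B is offline (USB drive or similar), it is worth checksumming the archives first so corruption is caught before you try to load them. A sketch with dummy files standing in for the real image archives:

```shell
# Dummy stand-ins for the real .docker archives, in a scratch dir
workdir=$(mktemp -d)
cd "$workdir"
echo demo1 > registry2.docker
echo demo2 > ece_1.1.4.docker
sha256sum *.docker > images.sha256
# Ship images.sha256 along with the archives; then, on Server B, verify:
sha256sum -c images.sha256
```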

On Server B (without internet connection)

  1. Load all the .docker files
    docker load -i registry2.docker
    docker load -i ece_1.1.4.docker
    docker load -i es_6.3.docker
    docker load -i kibana_6.3.docker
  2. Create Self-Signed Cert
    1. Prepare Cert Configure file
    2. Create a new file: /etc/ssl/mycert.conf
    3. Paste this content and update according to your situation
      [req]
      distinguished_name = req_distinguished_name
      x509_extensions = v3_req
      prompt = no
      [req_distinguished_name]
      C = US
      ST = VA
      L = SomeCity
      O = MyCompany
      OU = MyDivision
      CN = www.company.com
      [v3_req]
      keyUsage = keyEncipherment, dataEncipherment
      extendedKeyUsage = serverAuth
      subjectAltName = @alt_names
      [alt_names]
      DNS.1 = www.company.net
      DNS.2 = company.net
      IP.1 = 10.10.10.10
      
  3. Go to /etc/ssl and run this command
    openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout mycert.private -out mycert.cert -config mycert.conf -extensions 'v3_req'
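To confirm the SAN entries took effect, you can inspect the generated cert. A sketch that runs the same openssl command against a throwaway copy of the config in a temp dir (the names and IP are the hypothetical values from the config above):

```shell
# Generate a throwaway self-signed cert and check its SAN entries.
workdir=$(mktemp -d)
cat > "$workdir/mycert.conf" <<'EOF'
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = VA
L = SomeCity
O = MyCompany
OU = MyDivision
CN = www.company.com
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = www.company.net
IP.1 = 10.10.10.10
EOF
openssl req -x509 -nodes -days 730 -newkey rsa:2048 \
  -keyout "$workdir/mycert.private" -out "$workdir/mycert.cert" \
  -config "$workdir/mycert.conf" -extensions 'v3_req' 2>/dev/null
# Print the SAN line so you can eyeball the result
openssl x509 -in "$workdir/mycert.cert" -noout -text | grep -A1 'Subject Alternative Name'
```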
    
  4. Move these 2 new files (key and cert) into the certs sub-folder (/etc/ssl/certs)
  5. Start the registry
    sudo docker run -d \
      --restart=always \
      --name registry \
      -v /etc/ssl/certs:/certs \
      -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/mycert.cert \
      -e REGISTRY_HTTP_TLS_KEY=/certs/mycert.private \
      -p 443:443 \
      registry:2
    
    1. --name registry: the name of this new registry container
    2. -v /etc/ssl/certs:/certs: mounts the host's /etc/ssl/certs into the container at /certs
  6. A few helpful commands
    1. Status of Registry
      sudo docker ps -a
      
    2. Stop Registry
      sudo docker container stop registry
      
    3. Delete Registry
      sudo docker rm CONTAINER_ID
      
  7. Tag all the available images
    docker tag docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:1.1.4 10.10.10.10:443/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker tag docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0 10.10.10.10:443/cloud-assets/elasticsearch:6.3.0-0
    docker tag docker.elastic.co/cloud-assets/kibana:6.3.0-0 10.10.10.10:443/cloud-assets/kibana:6.3.0-0
    
  8. Push the tagged images
    docker push 10.10.10.10:443/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    docker push 10.10.10.10:443/cloud-assets/elasticsearch:6.3.0-0
    docker push 10.10.10.10:443/cloud-assets/kibana:6.3.0-0
    
On Server C (without internet connection, pulling from Server B's registry)

  1. Create a new folder under /etc/docker/certs.d/ using the same name as the host:port of Server B
    mkdir /etc/docker/certs.d/10.10.10.10:443
    
  2. Copy mycert.cert from Server B (step 4 above) to this directory and call it ca.crt
  3. Pull from Private Registry
    docker pull 10.10.10.10:443/cloud-enterprise/elastic-cloud-enterprise:1.1.4
    
  4. Result
    605ce1bd3f31: Pull complete
    8319863bba65: Pull complete
  5. API Calls: you can also do this to interact with Private Registry
    1. Look up all available images
      https://10.10.10.10/v2/_catalog
      
      Output
      
      {
      "repositories":[
       "cloud-assets/elasticsearch",
       "cloud-assets/kibana",
       "cloud-enterprise/elastic-cloud-enterprise"]
      }
      
    2. Get details on an image
      https://10.10.10.10/v2/cloud-assets/kibana/tags/list
      
      Output
      
      {
      "name":"cloud-assets/kibana",
      "tags":["6.3.0-0"]
      }

Wednesday, August 1, 2018

Installing Zeppelin

Installing Standalone Zeppelin for EMR

This is a supplement to the AWS blog post of the same subject. In our scenario, we have a standalone server running Zeppelin alongside an EMR cluster, all running in a VPC without internet access. These are the steps we took to make this happen. The necessary files were obtained on an internet-connected machine and introduced via an S3 bucket.

  1. Launch a new EC2 Linux (RedHat) instance, we'll call this Zeppelin instance.
  2. Attach a security group that can talk to itself, call it zeppelin.
  3. Install the AWS CLI. Use these instructions. Here's a summary:
    curl -O https://bootstrap.pypa.io/get-pip.py
    python get-pip.py --user
    export PATH=~/.local/bin:$PATH
    source ~/.bash_profile
    pip install awscli --upgrade --user

  4. Install JDK 8 Developer:

    yum install java-1.8.0-openjdk-devel.x86_64
  5. This will appear at /etc/alternatives/java_sdk_openjdk
  6. Create a new directory: /home/ec2-user/zeppelin-notebook
  7. Download Zeppelin from Apache: http://apache.mirrors.tds.net/zeppelin/zeppelin-0.8.0/zeppelin-0.8.0-bin-all.tgz
  8. Extract Content
    tar -zxvf zeppelin-0.8.0-bin-all.tgz
  9. To make it simpler later on we're going to move this new directory to /home/ec2-user/zeppelin
    mv zeppelin-0.8.0-bin-all /home/ec2-user/zeppelin
  10. Download Spark from Apache
    http://www.gtlib.gatech.edu/pub/apache/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
  11. Extract Content and move it to this directory
    tar -zxvf spark-2.3.1-bin-hadoop2.7.tgz
    mv spark-2.3.1-bin-hadoop2.7 /home/ec2-user/prereqs/spark
  12. Go to EMR and launch a new cluster
    1. Go to Advanced Option
    2. Select Hadoop, Hive, and Spark
    3. Add 2 Custom JAR type Steps
      Step 1:
        Name: Hadoopconf
        JAR location: command-runner.jar
        Arguments: aws s3 cp /etc/hadoop/conf/ s3://<YOUR_S3_BUCKET>/hadoopconf --recursive
        Action on failure: Continue
      Step 2:
        Name: hive-site
        JAR location: command-runner.jar
        Arguments: aws s3 cp /etc/hive/conf/hive-site.xml s3://<YOUR_S3_BUCKET>/hiveconf/hive-site.xml
        Action on failure: Continue
      
  13. Leave the rest as defaults. I prefer to put them in the same subnet as my Zeppelin instance.
  14. Be sure to attach the security group, zeppelin, that we created above as an Additional Security Group.
  15. Launch cluster
  16. Go to Steps and wait for them to complete
  17. Go back to Zeppelin Instance
  18. Download hadoopconf from the S3 location used in step 12
    aws s3 sync s3://<YOUR_S3_BUCKET>/hadoopconf /home/ec2-user/hadoopconf
    
  19. Download hive-site from S3 to zeppelin conf directory
    aws s3 cp s3://<YOUR_S3_BUCKET>/hiveconf/hive-site.xml /home/ec2-user/zeppelin/conf/hive-site.xml
    
  20. Make a copy of /home/ec2-user/zeppelin/conf/zeppelin-env.sh.template as zeppelin-env.sh
  21. Add following to the top of the file:
    export JAVA_HOME=/etc/alternatives/java_sdk_openjdk
    export MASTER=yarn-client
    export HADOOP_CONF_DIR=/home/ec2-user/hadoopconf
    export ZEPPELIN_NOTEBOOK_DIR=/home/ec2-user/zeppelin-notebook
    export SPARK_HOME=/home/ec2-user/prereqs/spark
    
  22. Make a copy of /home/ec2-user/zeppelin/conf/zeppelin-site.xml.template as zeppelin-site.xml
  23. Edit the following entry
    <name>zeppelin.notebook.dir</name>
    <value>/home/ec2-user/zeppelin-notebook</value>
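The zeppelin-site.xml edit above can also be scripted. A sketch using sed against a scratch copy of the relevant stanza (the `notebook` placeholder value here is an assumption about the template's default; check your actual template first):

```shell
# Scratch copy of the property stanza to be edited
conf=$(mktemp)
cat > "$conf" <<'EOF'
<property>
  <name>zeppelin.notebook.dir</name>
  <value>notebook</value>
</property>
EOF
# Point the notebook dir at the directory created earlier
sed -i 's|<value>notebook</value>|<value>/home/ec2-user/zeppelin-notebook</value>|' "$conf"
grep '<value>' "$conf"
```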
    
  24. Make a copy of /home/ec2-user/prereqs/spark/conf/spark-env.sh.template as spark-env.sh
  25. Add the following to the top:
    export HADOOP_CONF_DIR=/home/ec2-user/hadoopconf
    export JAVA_HOME=/etc/alternatives/java_sdk_openjdk
    
  26. Start it (run from /home/ec2-user/zeppelin)
    sudo bin/zeppelin-daemon.sh start
    
  27. Tail the log file found in /home/ec2-user/zeppelin/logs and wait for Zeppelin to start
  28. To check the status of daemon (you can also do stop or restart in place of status)
    sudo bin/zeppelin-daemon.sh status
    
  29. While it's doing that, let's download jersey-client (a Spark dependency). A newer version would probably work, but I'm going with the one in the documentation.
    wget http://central.maven.org/maven2/com/sun/jersey/jersey-client/1.13/jersey-client-1.13.jar
    
  30. Put this at /dependencies/jersey-client-1.13.jar
  31. Log into Zeppelin (it's on port 8080)
  32. Go to Interpreter
  33. Go to Spark and click Edit
  34. Add the jersey-client we downloaded as a Dependency using the full path /dependencies/jersey-client-1.13.jar
  35. Click Save
  36. Now we're going to download the sample note here
  37. Also download this csv file and move it to your own S3 bucket location. 
  38. Go back home
  39. Click import Note
  40. Select the sample note we downloaded on step 36
  41. When the note comes up, update the csv file location to your own S3 bucket location where you moved this file in step 37
  42. Run it
