Category Archives: Linux

Use the same Dockerfile – please

As containers have progressed, Docker has stood out as the de facto standard. As many of the laggards come up to speed, Dockerfiles can be seen in many open source repositories. In addition to that, I have seen a few repos with a Dockerfile-prod, Dockerfile-dev, Dockerfile-test, etc.

Additionally, you sometimes find an if clause in the CMD statement, such as:

CMD if [ "$REACT_NODE_ENV" = "development" ]; \
  then yarn dev;  \
  else yarn build && yarn start --only=production; \
  fi

To those repositories, I have one burning question:

WHY?

Container start commands can be overridden at run time. Here is how to do it:

# Dockerfile
FROM alpine:3.6

CMD echo "production start command"
# docker-compose.yml
version: '3'
services:
  dev-server:
    build: .
    command: echo 'development start command'
# kubernetes-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cool-app
  namespace: cool-app-testing
  labels:
    app: cool-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cool-app
  template:
    metadata:
      labels:
        app: cool-app
    spec:
      containers:
      - name: cool-app
        image: alpine:3.6
        command:
        - echo
        - 'testing start command'
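
The same override works with a plain docker run as well; anything after the image name replaces the CMD (the image tag here is just an example):

# Build the image from the Dockerfile above, then override its start command
docker build -t cool-app .
docker run cool-app echo 'development start command'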

Jenkins-x on home kubernetes cluster

Jenkins-x appears to be the next big thing in CI/CD workflows – especially if you develop applications on kubernetes. There were a few tweaks I needed to make to set it up:

  1. I had to manually create Persistent Volumes (no big deal; below is what I have for my NFS share)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins
      namespace: jx
      labels:
        app: jenkins
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-chartmuseum
      namespace: jx
      labels:
        app: jenkins-x-chartmuseum
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-chartmuseum"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-docker-registry
      namespace: jx
      labels:
        app: jenkins-x-docker-registry
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-docker-registry"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-mongodb
      namespace: jx
      labels:
        app: jenkins-x-mongodb
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-mongodb"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-nexus
      namespace: jx
      labels:
        app: jenkins-x-nexus
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-nexus"
  2. I had to modify the install line
    jx install --ingress-namespace ingress-nginx --domain wplr.rocks --tls-acme true --skip-ingress
  3. I had to modify the jenkins-x-mongodb deployment to use image mongo:3.6.5-jessie. I still wonder why people use bitnami images.
  4. I had to add
    securityContext:
      runAsUser: 1024

    to the jenkins-x-nexus deployment (as sketched below). The container was trying to change permissions on my NFS mount, and I am not sure why my Synology NFS does not like permission changes.
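
    For reference, a one-off patch for that change (the deployment and namespace names are the ones jx created in my cluster) would look something like:

    kubectl -n jx patch deployment jenkins-x-nexus --type merge \
      -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1024}}}}}'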

Even after those changes, jenkins-x-monocular-ui still fails to start -_- … I have run out of time for now. More debugging to come later (MUCH MUCH later).

Allowing outside access to Home Kubernetes Cluster

After I created a home kubernetes cluster, I immediately wanted to allow external access to pods/services/ingresses hosted inside the cluster. One must be aware that in bare-metal environments, there is no receiver for an API call to create a load balancer. Since there is no scriptable environment available to kubernetes, it cannot request external IP addresses or provision the resources one has come to expect in cloud environments such as AWS. This is a huge bummer – especially since dynamically built environments are fun to have.

To route traffic to web services inside of kubernetes, you have two options available: an Ingress or a Service. Services can be exposed via NodePort, LoadBalancer, or ClusterIP. On bare metal, LoadBalancer will never work (unless you code your own API call to configure a load balancer outside of kubernetes). ClusterIP might work if you want to manage a routing table somewhere inside your network, and NodePort will work if you want to manage a port-forwarding table on your router. None of these options are fun for home labs on bare metal. An Ingress acts like a layer 7 router: it reads the hostname and path of the incoming HTTP request and routes to the applicable Service. This works great for a dynamic environment where I am going to host multiple HTTP endpoints.

The overall view of this traffic is going to be: Internet > Router > k8s Ingress > k8s Service > Pod(s).

To use an Ingress in kubernetes, the ingress controller itself has to be exposed as a Service. In cloud environments, that Service is created as type LoadBalancer; in home labs, we create it as type NodePort and port forward on the router to any node in the kubernetes cluster.

$ kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.102.173.184   <none>        80/TCP                       3d
ingress-nginx          NodePort    10.110.162.247   <none>        80:30746/TCP,443:32641/TCP   3d

In my home lab, I am port forwarding on my router 0.0.0.0:80 -> <any_node>:30746 and 0.0.0.0:443 -> <any_node>:32641.

Since I have a non-traditional home router (a Linux server with two NICs), I could have entered these port forwards into iptables, but I improved upon that by setting up a load balancer such as nginx. nginx lets me load balance the forwarded ports across all my nodes and gives me an easy config file to edit. Because I also want to use cert-manager with free Let’s Encrypt SSL certificates, I chose to use nginx’s TCP stream module.
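
A rough sketch of that stream configuration (the node IPs below are placeholders for my worker nodes; the NodePorts are the ones shown above, and the stream block sits at the top level of nginx.conf, next to http):

# Append a TCP pass-through load balancer for the ingress NodePorts
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream k8s_http  { server 192.168.0.111:30746; server 192.168.0.112:30746; }
    upstream k8s_https { server 192.168.0.111:32641; server 192.168.0.112:32641; }

    server { listen 80;  proxy_pass k8s_http; }
    server { listen 443; proxy_pass k8s_https; }
}
EOF
nginx -t && nginx -s reload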

Another hiccup (so to speak) in home-based labs is that ISPs hand out DHCP addresses. So when my internet IP changes, I would need to update the DNS of all my HTTP endpoints. Rather than doing that, I have all my home URLs (*.wplr.rocks) CNAME to a single hostname, which a script keeps updated with the correct IP.
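
The script itself is nothing fancy; roughly something like this, where update_dns_record is a placeholder for whatever API your DNS provider offers and the hostname is an example:

#!/bin/sh
# Refresh the dynamic DNS record only when the public IP actually changes
CURRENT_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
LAST_IP=$(cat /var/tmp/last_ip 2>/dev/null)

if [ "$CURRENT_IP" != "$LAST_IP" ]; then
    update_dns_record "home.wplr.rocks" "$CURRENT_IP"   # placeholder for your DNS API
    echo "$CURRENT_IP" > /var/tmp/last_ip
fi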

Home Kubernetes cluster

So I admit it – I am completely obsessed with Kubernetes. Many of the web app/API deployment challenges of the past 20 years have been largely solved by the Kubernetes orchestration and scheduling platform. Kubernetes brings fault tolerance and highly available systems if set up correctly (i.e. use a Kubernetes cloud installer). I enjoy having the power to write YAML, apply it to the infrastructure, and watch it eventually become what I told it to be. No longer do I need to write the scripts to do it – it does it for me 🙂

In my first kubernetes home cluster, I re-used my home router and my old media center as a single node + master, but I was hit by a 2-year-old Kubernetes bug. My old PC was out in the open, and since my 2-year-old son likes to press power buttons, he came over and constantly pressed the power button on my Kubernetes master. This caused me to look for a small mini computer that I could place in my cabinet out of view. I finally settled on this as my computer of choice. At $150 each for 4 cores, 4GB RAM, and 32GB NVMe, I thought it was a good deal and ample to run a few containers per node with NFS-backed storage.

These little machines booted via UEFI PXE (after pressing DEL to enter the BIOS and selecting the Realtek boot option on the save/exit page). I used this kickstart file, which installed CentOS 7 minimal by passing the ks argument to a CentOS initramfs.

After the servers were installed, I ran this script.

Note: for one of the three machines I received, I had to put modprobe br_netfilter in /etc/rc.local before the settings in /etc/sysctl.d/k8s.conf would apply.
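
For reference, the usual kubeadm bridge settings look like this; loading br_netfilter first is what makes the net.bridge.* keys available:

modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system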

Adding a user to k8s RBAC

In order to add a user to a kubernetes cluster, we will need several things: kubectl, CA.crt and CA.key (found in your head node’s /etc/kubernetes/pki folder), and openssl.

First, create a private key for the new user. In this example, we will name the file employee.key:

openssl genrsa -out employee.key 2048

Next, we will need to create a certificate signing request – employee.csr – using the private key we just created (employee.key in this example). Make sure to specify your username and group in the -subj section (CN is for the username and O is for the group).

openssl req -new -key employee.key -out employee.csr -subj "/CN=username/O=developer"

Generate the final certificate employee.crt by approving the certificate signing request, employee.csr, that you made earlier. In this example, the certificate will be valid for 90 days.

openssl x509 -req -in employee.csr -CA CA.crt -CAkey CA.key -CAcreateserial -out employee.crt -days 90

Give employee.crt, employee.key, and CA.crt to the new employee and have the employee follow the below steps.

# Set up the cluster
$ kubectl config set-cluster k8s.domain.tld --server https://api.k8s.domain.tld --certificate-authority /path/to/CA.crt --embed-certs=true

# Set up the credentials (a.k.a login information)
$ kubectl config set-credentials <name> --client-certificate=/path/to/cert.crt --client-key=/path/to/cert.key --embed-certs=true

# bind login to server
$ kubectl config set-context k8s.domain.tld --cluster=k8s.domain.tld --user=<name>
# Optional: append `--namespace=<namespace>` to the command to set a default namespace.

Note: You may move the certificates to a safe location since the commands included --embed-certs=true. This saves the certs in base64 form inside the kubeconfig file.
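
Creating the credentials only authenticates the user; the group from the certificate (O=developer in the example above) still needs an RBAC binding on the cluster side. A minimal sketch using the built-in edit ClusterRole (the namespace is just an example):

# Switch to the newly created context
kubectl config use-context k8s.domain.tld

# Grant the "developer" group edit rights in a single namespace
kubectl create rolebinding developer-edit --clusterrole=edit --group=developer --namespace=default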

Sometimes I post to my blog so I remember how to do a particular thing. This is one of those times.

Reusable containers with confd

I recently had the need to populate a file in a docker container based upon whether the container is in production or development. I eventually came across confd, which let me populate data in files based upon particular environment variables. While confd excels with distributed key value stores, my needs (and infrastructure) are at a much simpler level.

Confd requires a couple of folders: one for TOML resource files (/etc/confd/conf.d/) and one for template files (/etc/confd/templates/). When confd runs, it looks at the contents of each TOML file in the conf.d directory and processes them according to their instructions.

In my repository example, I want a container to say hello to me when it senses a NAME environment variable and print out the current datetime. If no environment variable is set, only the datetime is printed. To do this, I must create the TOML file to look like this:

[template]
src = "echo.tmpl"
dest = "/echo"

This file instructs confd to generate the echo file, place it in the root (/), and use /etc/confd/templates/echo.tmpl as the template for its contents.

When we build the container, we must include these configuration files and ensure confd is run to generate the destination file. My example Dockerfile does just that by including all of the files in the container and running the docker-entrypoint script, which basically runs confd and then executes the newly generated file.
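
The real template and entrypoint live in the example repository; they boil down to roughly this (the echo.tmpl contents shown in the comment are my approximation):

#!/bin/sh
# docker-entrypoint.sh (sketch): render /echo from the env backend, then execute it.
#
# /etc/confd/templates/echo.tmpl might look something like:
#   {{if getenv "NAME"}}echo "Hello {{getenv "NAME"}}"{{end}}
#   echo "The current time is: $(date)"

/usr/local/bin/confd -onetime -backend env
sh /echo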

 andrew@wipplerxps > ~/git_repos/confd $  docker build -t blog-confd .
Sending build context to Docker daemon 57.34 kB
Step 1/9 : FROM centos:7.4.1708
 ---> 5076a7d1a386
Step 2/9 : LABEL maintainer "andrew.wippler@gmail.com"
 ---> Using cache
 ---> d712b31f7449
Step 3/9 : RUN mkdir -p /etc/confd/{conf.d,templates}
 ---> Running in f340bdcdf973
 ---> 1f0faa9b962f
Removing intermediate container f340bdcdf973
Step 4/9 : COPY docker/confd/ /etc/confd/
 ---> fb16dffc63ac
Removing intermediate container 133128cb7fc1
Step 5/9 : ADD https://github.com/kelseyhightower/confd/releases/download/v0.14.0/confd-0.14.0-linux-amd64 /usr/local/bin/confd
Downloading 17.61 MB/17.61 MB
 ---> a62b388274e6
Removing intermediate container 3f9ec343a5ab
Step 6/9 : RUN chmod +x /usr/local/bin/confd
 ---> Running in 1489dd02ea45
 ---> ab99a5fc5f95
Removing intermediate container 1489dd02ea45
Step 7/9 : COPY docker/docker-entrypoint.sh /var/local/
 ---> 16906971c8ef
Removing intermediate container 7a17a8e17e22
Step 8/9 : RUN chmod a+x /var/local/docker-entrypoint.sh
 ---> Running in 1562a6d06432
 ---> f963372159b1
Removing intermediate container 1562a6d06432
Step 9/9 : ENTRYPOINT /var/local/docker-entrypoint.sh
 ---> Running in 1b7e12c38b4c
 ---> f7d260597e0a
Removing intermediate container 1b7e12c38b4c
Successfully built f7d260597e0a
 andrew@wipplerxps > ~/git_repos/confd $  docker run blog-confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo has been updated
The current time is: Tue Nov 28 20:05:24 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $  docker run -e NAME="Andrew Wippler" blog-confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo has been updated
Hello Andrew Wippler
The current time is: Tue Nov 28 20:05:34 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $

While it is fun to say hello to yourself once in a while, I am using confd to modify an nginx.conf. When I pass in the SSL environment variable, nginx listens on port 443 with a self-signed cert and forwards all HTTP traffic to HTTPS. Obviously in production, I want to use a real SSL cert. Using confd allows me to have the same docker container in development and production – the only difference being a configuration change.
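
The nginx template follows the same pattern; here is a rough sketch of the idea (not my exact template):

cat > /etc/confd/templates/nginx.conf.tmpl <<'EOF'
server {
    listen 80;
    {{if getenv "SSL"}}return 301 https://$host$request_uri;{{else}}root /usr/share/nginx/html;{{end}}
}
{{if getenv "SSL"}}
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
    root /usr/share/nginx/html;
}
{{end}}
EOF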

Autosign Puppet certificates on AWS

Let’s face it, Puppet’s certificate workflow is a pain and a huge administrative burden if done manually. Thankfully, Puppet has designed several methods of auto-signing certificates, one of which is crafting a special certificate signing request and verifying that the CSR is genuine.

On the puppet master

Apply the following code on your puppet master. This will set up the autosign script which will verify your custom certificate signing request. If the CSR is genuine, the puppet master will sign the certificate.

  service { 'puppetserver':
    ensure => running,
    enable => true,
  }

# The file must have execute permissions
# The master will trigger this as `/etc/puppetlabs/puppet/autosign.sh FQDN`
  file { '/etc/puppetlabs/puppet/autosign.sh':
    ensure  => file,
    mode    => '0750',
    owner   => 'puppet',
    group   => 'puppet',
    content => '#!/bin/bash
HOST=$1
openssl req -noout -text -in "/etc/puppetlabs/puppet/ssl/ca/requests/$HOST.pem" | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa',
  }

# This sets up the required ini setting and restarts the puppet master service
  ini_setting {'autosign nodes':
    ensure  => present,
    path    => '/etc/puppetlabs/puppet/puppet.conf',
    section => 'master',
    setting => 'autosign',
    value   => '/etc/puppetlabs/puppet/autosign.sh',
    notify  => Service['puppetserver'],
    require => File['/etc/puppetlabs/puppet/autosign.sh']
  }
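
A quick way to sanity-check the script on the master is to run it by hand against a node that already has a pending CSR under ca/requests (the node name here is hypothetical); grep exiting 0 is what tells the master to sign:

/etc/puppetlabs/puppet/autosign.sh agent01.example.com && echo "would sign" || echo "would reject"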

On the agents

With our puppet master ready to go, we need to set up our agents to generate the custom certificate request. This can be done by editing /etc/puppetlabs/puppet/csr_attributes.yaml before running puppet, with content like the following (the $(curl) bits are expanded when the file is generated by the shell, as shown in the Execution section below):

custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)

Note: The 1.2.840.113549.1.9.7 value (the PKCS#9 challengePassword OID) must match the string you grep for in the autosign script. This specific attribute is reserved for purposes such as this.

Execution

With everything in place, the way to execute this successfully is to pass in the below as the userdata script when creating an EC2 instance:

#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML

An alternative method is to create a custom AMI (especially for auto-scaling groups). I use the below puppet code to create my golden AMI.

  cron { 'run aws_cert at reboot':
    command => '/aws_cert.sh',
    user    => 'root',
    special => 'reboot',
    require => File['/aws_cert.sh'],
  }

  file { '/aws_cert.sh':
    ensure  => file,
    mode    => '0755',
    content => '#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
  1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
  pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  pp_image_name: $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML

export CERTNAME="aws-node_name-`date +%s`"

/opt/puppetlabs/bin/puppet apply -e "ini_setting {\"certname\": \
  ensure  => present, \
  path    => \"/etc/puppetlabs/puppet/puppet.conf\", \
  section => \"main\", \
  setting => \"certname\", \
  value   => \"$CERTNAME\", \
  }"

/opt/puppetlabs/bin/puppet agent -t -w 5',
  }

Moving to Desktop GNU/Linux from Windows/Mac

There are many curious individuals who tinker with GNU/Linux as a Server OS and want to experience what it is like as a Desktop OS. The switch is often hindered by two obstacles:

  1. Some daily-use programs are not available (e.g. Photoshop, iTunes, etc.).
  2. The unknowns: what to do if something goes wrong, or how to get a 3D graphics driver installed and working.

While these are valid reasons and definite showstoppers for some, others can safely migrate to GNU/Linux.

The obstacle of programs

I like Krita as an alternative to Photoshop. The menu options are nearly the same, and I do not have to install a silly theme (like I have to do in Gimp) or re-learn photo editing just to figure out where everything is. I have successfully installed Photoshop CS4 with Wine without any issues, but Krita has more features than CS4. Darktable is also a good alternative to Photoshop RAW/Bridge.

Rhythmbox connects to iPhones/iPods the same way iTunes does, but without the store. iTunes also runs quite well on a recent version of Wine. Some might also want to check out Clementine.

Almost every program has an alternative. Alternatives can be found via alternativeto.net or the Software Recommendations site on Stack Exchange.

The unknown obstacles

To use GNU/Linux successfully as a primary desktop OS, in my opinion, one must have a desktop with well-supported hardware. I consider myself an AMD guy: I like the price for performance, and I rarely do CPU-intensive tasks on my desktop. When AMD bought ATI, I was also happy, as ATI was my favorite graphics card maker. Unfortunately, most desktop GNU/Linux users are developers and need that extra performance; they have workstations with Nvidia graphics cards and Intel CPUs. You will often find that desktop GNU/Linux performs better, is easier to use, and has more tutorials when it comes to Nvidia graphics cards and getting them working.

Captive Portal Overview

I originally authored this on Aug 16, 2016 at http://unix.stackexchange.com. Considering my tutorial did not include an overview, I thought I would re-post it on my blog.


To make a captive portal appear, you need to stop all internet traffic and provide a 302 redirect to the client’s browser. To do this, you need a firewall (like iptables) to redirect all traffic to a webserver (like nginx, apache, etc.), where the webserver responds with a 302 redirect to the URL of your login page.

I have written a lengthy article on how to do this with a Raspberry Pi. It basically boils down to iptables blocking/redirecting traffic to the webserver:

iptables -t nat -A wlan0_Unknown -p tcp --dport 80 -j DNAT --to-destination 192.168.24.1

and then the webserver (nginx) redirecting to the login page:

# For iOS
if ($http_user_agent ~* (CaptiveNetworkSupport) ) {
    return 302 http://hotspot.localnet/hotspot.html;
}

# For others
location / {
    return 302 http://hotspot.localnet/;
}

iOS has to be difficult in that it needs the WISPr settings. The hotspot.html contents are as follows:

<!--
<?xml version="1.0" encoding="UTF-8"?>
<WISPAccessGatewayParam xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.wballiance.net/wispr_2_0.xsd">
<Redirect>
<MessageType>100</MessageType>
<ResponseCode>0</ResponseCode>
<VersionHigh>2.0</VersionHigh>
<VersionLow>1.0</VersionLow>
<AccessProcedure>1.0</AccessProcedure>
<AccessLocation>Andrew Wippler is awesome</AccessLocation>
<LocationName>MyOpenAP</LocationName>
<LoginURL>http://hotspot.localnet/</LoginURL>
</Redirect>
</WISPAccessGatewayParam>
-->

Captive Portal Restaurant Menu

I have been contacted several times in regard to my captive portal post. In India there seems to be a surge in popularity for restaurants to have an open WiFi that prompts the user with a menu/splash page. The caveat is the legal issues encountered when providing free, open wireless internet; to avoid them, the device that broadcasts must be disconnected from the internet. Although I am curious whether just blocking internet access for those connecting is enough.

It seems like an interesting issue to tackle, but I think creating something out of a WiFi captive portal would be like hammering a square peg through a round hole. It might work (if given enough time and effort), but in the end, it is probably not the right tool for the job.

While writing this post, I was reminded how Google appears to be tackling this problem. On my Android phone, I let the NSA spy on my whereabouts by enabling location services. It also lets my wife pinpoint where I am physically located. With it enabled, I can visit most shops in my area and get a Google maps prompt with the business information, reviews, and a few pictures. (Side note: if you appear in court over a childish/dumb action, you validate the judge’s decision when you post a negative review of the court house. Also please do not butcher the English language when trying to review places.)

Utilizing GPS location as well as having an app that provides the information seems like the best route to go in this circumstance. An alternative would be to have an app with WiFi credentials hardcoded in, listen for when a WiFi connection is made, check to see if it matches a predefined SSID, and attempt to communicate with a local app server to process data. Of course doing something like that is outside the scope of my tutorials.

Easy unix epoch timestamps from CLI

While working on various projects, and ultimately needing a Unix timestamp for expiring Swift objects in OpenStack, I needed a quick way to convert past, present, and future dates to the Unix epoch. Traditionally, I went to Google, searched for a Unix timestamp converter, and retrieved my seconds that way. Unfortunately, in exams you are not allowed to visit external websites.

If you know how to read documentation, you will already know that the date command has this feature built in. An excerpt from the docs follows:

 ...
       Show the local time for 9AM next Friday on the west coast of the US

              $ date --date='TZ="America/Los_Angeles" 09:00 next Fri'

DATE STRING
       The  --date=STRING  is  a mostly free format human readable date string
       such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29  16:21:42"  or
       even  "next Thursday".  A date string may contain items indicating cal‐
       endar date, time of day, time zone, day of week, relative  time,  rela‐
       tive date, and numbers.  An empty string indicates the beginning of the
       day.  The date string format is more complex than is easily  documented
       here but is fully described in the info documentation.
...

Further reading of the docs will point you to formatting the output as seconds since the epoch with date +%s. So when the time comes to expire an object from Swift at 17:00 next Friday, you can do something like:

swift post container file -H "X-Delete-At: $(date +%s --date='17:00 next Friday')"
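
date can also convert in the other direction, which makes it easy to double-check a timestamp before handing it to Swift:

# Human-readable time to epoch seconds (GNU date)
date +%s --date="17:00 next Friday"

# Epoch seconds back to a readable date; the value here is only an example
date --date=@1512752400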

OpenStack PS1 snippet

I have been studying for my OpenStack certification test (the COA), which is scheduled next week. One thing that was painful to keep track of was which user I was using to interface with OpenStack, as the rc file you download from OpenStack does not update your PS1 prompt. I came up with the following solution and placed it in my ~/.bashrc:


function parse_os_user() {
    if [ ! "${OS_USERNAME}" == "" ]
    then
        echo "(${OS_USERNAME})"
    else
        echo ""
    fi
}

PS1='\u@\h \w `parse_os_user` $ '
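
After sourcing an rc file (the file name below is only an example), the prompt picks up the user:

user@host ~  $ source admin-openrc.sh
user@host ~ (admin) $ openstack server list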

OpenStack certification

On Dec 20th, I am scheduled to take my COA exam. From the exam requirements page, it appears to be a moderately difficult exam. The few points I need to work on are Heat templates and Swift object administration. The few things I know about the exam come from publicly available YouTube videos of the OpenStack Summit sessions.

One of my troubles when studying for exams is creating content to test myself on the exam objectives. I look at the requirements and say to myself, “I know that,” and nothing gets written for that aspect. One thing I have done in the past is to search GitHub for exam prep questions. One I have found for OpenStack is AJNOURI/COA. He also made a nifty website for his test prep questions.

A key aspect that has helped me pass all of my open book exams is recalling the locations of my troubled areas. Looking at the docs/reading the manual has become a best practice of mine. Most of the time, exam questions are covered in the docs, as the exams expect you to have read them.

Using Puppet to host a private RPM repository

A repository is a place where files are stored, indexed, and made available through a package manager to anyone who has the repository information. With RPM-based systems, a repository is created with a tool called createrepo. Most of the time, publicly available repositories already offer the packages your server needs. When you have a custom application you want to deploy (or even rebuild an existing application with your patches), it is best to distribute that package with a repository rather than a file share or some other means. Often a folder structure is created so that differing client OS versions can connect to the same repository and access versions compiled for that specific release. In my example below, I am not creating this folder structure, as I am only serving one major release – CentOS 7 – and the packages I am generating are website directories, which are just collections of portable code.

A private repository is not a tricky feat – all you have to do is serve the repository via HTTPS and require HTTP basic authentication. You then configure the clients to connect to the repository with the basic authentication credentials in the URL string (i.e. baseurl=https://user:pass@repo.example.com/). HTTPS is not strictly required to serve a repository, but it does prevent network snoopers from seeing your repository credentials.

Now that we know what is needed for a private repository, we can then define it in our puppet code.

node 'repo.example.com' {

  file { '/var/yumrepos':
    ensure => directory,
  }

  createrepo { 'yumrepo':
    repository_dir => '/var/yumrepos/yumrepo',
    repo_cache_dir => '/var/cache/yumrepos/yumrepo',
    enable_cron    => false, #optional cron job to generate new rpms every 10 minutes
  }

  package { 'httpd':
    ensure => installed,
  }

  httpauth { 'repouser':
    ensure    => present,
    file      => '/usr/local/nagios/etc/htpasswd.users',
    password  => 'some-long-password',
    mechanism => basic,
    require   => Package['httpd'],
  }

  file { '/usr/local/nagios/etc/htpasswd.users':
    ensure => file,
    owner  => 'nginx',
    mode   => '0644',
  }

  class{'nginx':
    manage_repo    => true,
    package_source => 'nginx-mainline',
  }

  nginx::resource::vhost{"$::fqdn":
    www_root             => '/var/yumrepos/yumrepo',
    index_files          => [],
    autoindex            => 'on',
    rewrite_to_https     => true,
    ssl                  => true,
    auth_basic           => 'true',
    auth_basic_user_file => '/usr/local/nagios/etc/htpasswd.users',
    ssl_cert             => "/etc/puppetlabs/puppet/ssl/public_keys/$::fqdn.pem",
    ssl_key              => "/etc/puppetlabs/puppet/ssl/private_keys/$::fqdn.pem",
    vhost_cfg_prepend    => {
      'default_type'     => 'text/html',
    }
  }

}

For the above code to work, we need the required modules:

mod 'palli/createrepo', '1.1.0'
mod "puppet/nginx", "0.4.0"
mod "jamtur01/httpauth", "0.0.3"

We can then use the following declaration on our nodes to use this repository.

yumrepo {'private-repo':
  descr           => 'My Private Repo - x86_64',
  baseurl         => 'https://repouser:some-long-password@repo.example.com/',
  enabled         => 'true',
  gpgcheck        => 'false',
  metadata_expire => '1',
}
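
With that in place, publishing a new package is just a matter of dropping the RPM into the repository directory and refreshing the metadata (the package name below is made up; the paths match the createrepo resource above), or letting the module's optional cron job do it:

cp my-app-1.0.0-1.el7.x86_64.rpm /var/yumrepos/yumrepo/
createrepo --update --cachedir /var/cache/yumrepos/yumrepo /var/yumrepos/yumrepo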

You now have a fully functional private repository – deploy your awesome software.

Repercussions from a 1.1 Tbps DDoS

In case you missed it, the largest recorded Distributed Denial of Service (DDoS) attack occurred. While under DDoS, a victim’s server (or servers) is under high load and cannot complete all the requests made of it. Basically, a DDoS victim is someone the attacker wants silenced on the internet. In order to launch a DDoS of that magnitude, the attacker has to have control over many computers – a botnet. It is believed that this attack originated from over 150,000 devices in the IoT category (smart TVs, refrigerators, thermostats, etc.). Due to their poor default security, IoT devices are easy targets for hackers who intend to add them to their botnets. A recent article on Ars Technica points out the current issues with IoT and Linux kernel security but, like most articles of this nature, provides no clear-cut solution to the problem we are experiencing. Below are my thoughts on this current situation and how it may be resolved.

We need a governing body to issue a seal of approval for IoT devices and anything else that ships with the Linux kernel. Then we, as consumers, must use, buy, and encourage others to buy from the companies that have this seal. The governing body should ensure each company seeking the seal complies with the following criteria:

  1. Every new device created and sent to market must have a minimum of 5 years’ worth of bi-monthly security patches and updates from the day of release to the public.
  2. In the event the company goes bankrupt, dissolves, or cannot support any older product it has released in the past 5 years, the company must provide schematics, instructions, or software so that open source enthusiasts can recreate, patch, or upgrade the legacy product.
  3. No known vulnerability may be willingly left unpatched.
  4. When a CVE is identified in a company’s product, a test case must be created and run on that code base for every future release.
  5. A notification service must be in place for when new updates are released and must be available in RSS or email form.
  6. Automatic updates should occur over HTTPS.
  7. Backdoors, admin terminals, etc. should require a physical connector to be attached to the device in order to grant access.

For a potential company to get this approval, it may seem like an arduous task to get all the controls in place; however, by applying DevOps methodologies, these tasks can become a simple feat. This would require the governing body to not only enforce the list, but also make training available for complying with it. For this reason, I suggest the Linux Foundation become this governing body and issue the seals of approval.