Docker-izing WordPress for Kubernetes

WordPress is amazingly popular considering how antiquated its file structure and code appear to be. Still, it is the easiest CMS I have used, and the community has created plugins that make the copy-a-folder-for-a-new-theme/plugin workflow at least tolerable. The challenge comes when one wants to serve this 1990s-style web application in a more modern way, such as running inside a container on top of Kubernetes.

Containers are meant to be immutable and treated as read-only (no changes to files in the container after it is built). A container is supposed to be a point-in-time release of software: I can roll back to a specific container version and know exactly which code is running. This causes a problem with a file-dependent application such as WordPress. The best I could come up with for running WordPress in a container is a forward-only method of deploying code (basically, giving up the ability to roll back to a previous version). There is a way to keep that functionality, but it would mean storing everything (including uploads) inside an ever-growing container or using a central object store such as S3 for uploads. It would also require a rebuild of the container every time a plugin is updated – which would presumably be every hour. My deployments of WordPress are so small that I can hardly justify using S3 for uploads, keeping the plugins in sync, and retaining the ability to go backwards in time.

When deploying to Kubernetes, one can scale a Deployment to N replicas. Keeping plugins, themes, and updates the same across all replicas requires a shared READ WRITE MANY (RWX) volume. This could be a GlusterFS volume or NFS, but it cannot be an AWS EBS volume or any other single-attach block storage. Looking at the available WordPress images, three seem interesting. With the official image, I like that I can use php-fpm and Alpine. The next two most popular WordPress images have very bloated Dockerfiles. I have come to the conclusion that my WordPress container will have to be built from scratch.
The Dockerfile is very similar to the official WordPress container. It uses php:7.2-fpm-alpine as the base image, adds in nginx, and inserts a generic wp-config.php file.
The folder structure for the container is as follows:
WordPress Container Folder
├── docker-entrypoint.sh
├── Dockerfile
├── html
│   └── ... Contents of wordpress-X.Y.Z.zip
├── nginx.conf
└── wp-config.php
It can be built by running a command similar to docker build -t andrewwippler/wordpress:latest .
nginx.conf is a very basic configuration file with gzip and cache headers. The really neat things happen in the docker-entrypoint.sh file.
I borrowed the database creation script; however, since PHP was already installed in the container, I ran a few more checks in PHP rather than bash. For instance, the container places the local code in /var/www/html-original and rsyncs it to /var/www/html, where the webserver serves it – but only if the code in html-original is newer than the code in html. This allows an operator to mount a storage volume at /var/www/html that can be shared across Kubernetes Deployment replicas. The code for this is:
// see if we need to copy files over
// ($stderr is opened earlier in the entrypoint script, e.g.:)
$stderr = fopen('php://stderr', 'w');

include '/var/www/html-original/wp-includes/version.php';
$dockerWPversion = $wp_version;

if (file_exists('/var/www/html/wp-includes/version.php')) {
    include '/var/www/html/wp-includes/version.php';
    $installedWPversion = $wp_version;
} else {
    $installedWPversion = '0.0.0';
}

fwrite($stderr, "dockerWPversion: $dockerWPversion - installedWPversion: $installedWPversion\n");
if(version_compare($dockerWPversion, $installedWPversion, '>')) {
    fwrite($stderr, "Installing wordpress files\n");
    exec('rsync -au /var/www/html-original/ /var/www/html');
}
I have also included a theme-only check that will update the theme if it has changed. This is necessary to pick up theme changes when the WordPress version itself has not changed.
// note: a directory's mtime only changes when entries directly inside it change
if (filemtime('/var/www/html-original/wp-content/themes') > filemtime('/var/www/html/wp-content/themes')) {
    fwrite($stderr, "Updating theme files\n");
    exec('rsync -au --delete-after /var/www/html-original/wp-content/themes/ /var/www/html/wp-content/themes');
}
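To tie this back to Kubernetes: the pattern above assumes a ReadWriteMany volume mounted at /var/www/html in every replica. As a rough sketch (the claim name, size, replica count, and storage class are hypothetical – my actual manifests are not shown here), the claim and mount might look like this:

# Hypothetical sketch: an RWX claim shared by all WordPress replicas.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-html
spec:
  accessModes:
    - ReadWriteMany        # required so N replicas can mount the same files
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: andrewwippler/wordpress:latest
          volumeMounts:
            - name: html
              mountPath: /var/www/html   # the entrypoint rsyncs html-original here
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: wordpress-html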
All of the files I have referenced in this article are located in a gist. In addition to those files, a local docker-compose.yml file might be helpful for local development:
version: '2'
services:
  db:
    image: mariadb:10
    volumes:
      - ./tmp/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secretPASS

  wordpress:
    build: wordpress
    volumes:
      - ./html:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - db
    environment:
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=root
      - WORDPRESS_DB_PASSWORD=secretPASS
    ports:
      - 8080:80
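With that file in place, the stack can be brought up locally (this assumes the WordPress Dockerfile lives in a wordpress/ subfolder, as the build: key implies):

docker-compose up -d --build
# WordPress should now answer on http://localhost:8080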

Kubernetes: Heapster to Metrics Server

I recently updated my Kubernetes cluster from 1.10.2 to 1.11.0 and noticed that Heapster is deprecated and will be completely removed by version 1.13.0. I thought this would be the perfect time to try out metrics-server. I had to download the git repo to apply the Kubernetes YAML to my cluster. Since this is sometimes not as ideal as I would like (I prefer kubectl apply -f https:// when it comes from a trusted source), I am writing the below for easy access in the future:


kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/auth-delegator.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/auth-reader.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-apiservice.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-server-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-server-service.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/resource-reader.yaml

Note: this is for clusters running v1.8.0 or greater.
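Once the metrics-server pod is running (it can take a minute or two to collect its first metrics), the new API can be verified with kubectl:

kubectl top nodes
kubectl top pods --all-namespaces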

Jenkins-x on home kubernetes cluster

Jenkins-x appears to be the next big thing in CI/CD workflows – especially if you develop applications on Kubernetes. There were a few tweaks I needed to make to set it up:

  1. I had to manually create Persistent Volumes (no big deal; below is what I have for my NFS share)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins
      namespace: jx
      labels:
        app: jenkins
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-chartmuseum
      namespace: jx
      labels:
        app: jenkins-x-chartmuseum
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-chartmuseum"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-docker-registry
      namespace: jx
      labels:
        app: jenkins-x-docker-registry
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-docker-registry"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-mongodb
      namespace: jx
      labels:
        app: jenkins-x-mongodb
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-mongodb"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-nexus
      namespace: jx
      labels:
        app: jenkins-x-nexus
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-nexus"
  2. I had to modify the install line
    jx install --ingress-namespace ingress-nginx --domain wplr.rocks --tls-acme true --skip-ingress
  3. I had to modify the jenkins-x-mongodb deployment to use the image mongo:3.6.5-jessie. I still wonder why people use Bitnami images.
  4. I had to add
    securityContext:
      runAsUser: 1024

    to the jenkins-x-nexus deployment (see the patch sketch after this list). The container was trying to change permissions on my NFS mount. I am not sure why my Synology NFS does not like permission changes.
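For future reference, that change can be applied with a one-liner similar to the following (a sketch; it sets the pod-level securityContext on the deployment jx created):

kubectl -n jx patch deployment jenkins-x-nexus --type merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1024}}}}}'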


Even after those changes, jenkins-x-monocular-ui still fails to start -_- … I have run out of time for now. More debugging to come later (MUCH MUCH later)

DHCP IP updater

This is the script I use to update the DNS record for my home IP when it changes. I have it running once a week and have not noticed a lapse in coverage. If your ISP has DHCP configured correctly, you will receive the same IP address when your lease is due for renewal; otherwise, you need a script like the one below.

#!/usr/bin/ruby

require 'aws-sdk'
require 'socket'

def my_first_public_ipv4
  Socket.ip_address_list.detect{|intf| intf.ipv4? and !intf.ipv4_loopback? and !intf.ipv4_multicast? and !intf.ipv4_private?}
end

# safe navigation: my_first_public_ipv4 returns nil when no public IPv4 is found
ip = my_first_public_ipv4&.ip_address

unless ip.nil?

# UPSERT creates the record if it does not exist, updates it otherwise
change = {
  :action => 'UPSERT',
  :resource_record_set => {
    :name => "home.andrewwippler.com",
    :type => "A",
    :ttl => 600,
    :resource_records => [{:value => ip}]
}}

route53 = Aws::Route53::Client.new(
    region: 'us-east-1'
)
route53.change_resource_record_sets({
  hosted_zone_id: '/hostedzone/XXXXXXXXXXXXXXX', # required
  change_batch: { # required
    changes: [change],
  },
})

end
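I run the script from cron; a weekly entry similar to this one does the trick (the script path is hypothetical):

# min hour dom mon dow  command
0 3 * * 0 /usr/local/bin/update-home-ip.rb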

Allowing outside access to Home Kubernetes Cluster

After I created a home Kubernetes cluster, I immediately wanted to allow external access to pods/services/ingresses hosted inside the cluster. One must be aware that in bare-metal environments, there is no receiver for the API call that creates a load balancer. Without a programmable cloud behind it, Kubernetes cannot request external IP addresses or provision the resources one has come to expect in environments such as AWS. This is a huge bummer – especially since dynamically built environments are fun to have.

To route traffic to web services inside of Kubernetes, you have two options available: an Ingress or a Service. Services can be exposed via NodePort, LoadBalancer, or ClusterIP. On bare metal, LoadBalancer will never work (unless you code your own API call to configure a load balancer outside of Kubernetes). ClusterIP might work if you want to manage a routing table somewhere inside your network, and NodePort will work if you want to manage a port-forwarding table on your router. None of these options are fun for home labs on bare metal. An Ingress acts like a layer 7 router: it reads the hostname and path of the incoming HTTP request and routes to the applicable Service. This works great for a dynamic environment where I am going to host multiple HTTP endpoints.

The overall view of this traffic is going to be: Internet > Router > k8s Ingress > k8s Service > Pod(s).

For traffic to reach the Ingress controller, the controller itself has to be exposed as a Service. In cloud environments, that Service is created as type LoadBalancer; in home labs, we create it as type NodePort and port forward on the router to any node in the Kubernetes cluster.

$ kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.102.173.184   <none>        80/TCP                       3d
ingress-nginx          NodePort    10.110.162.247   <none>        80:30746/TCP,443:32641/TCP   3d

In my home lab, I am port forwarding on my router 0.0.0.0:80 -> <any_node>:30746 and 0.0.0.0:443 -> <any_node>:32641.

Since I have a non-traditional home router (a Linux server with two NICs), I could have entered these port forwards into iptables, but I improved upon that by setting up a load balancer instead. nginx lets me load balance the forwarded ports across all my nodes and gives me an easy config file to edit. Because I also want to use cert-manager with free Let’s Encrypt SSL certificates, I chose to use nginx’s TCP stream module.
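A minimal stream configuration for this looks roughly like the following (the node IPs are hypothetical; the NodePorts are the ones from the Service output above, and TLS passes through untouched so cert-manager can serve certificates from inside the cluster):

# inside the top level of /etc/nginx/nginx.conf on the router - a sketch
stream {
    upstream ingress_http {
        server 192.168.0.102:30746;   # k8s node 1
        server 192.168.0.103:30746;   # k8s node 2
        server 192.168.0.104:30746;   # k8s node 3
    }
    upstream ingress_https {
        server 192.168.0.102:32641;
        server 192.168.0.103:32641;
        server 192.168.0.104:32641;
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }
    server {
        listen 443;
        proxy_pass ingress_https;
    }
}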

Another hiccup (so to speak) in home-based labs is that ISPs hand out DHCP addresses. So when my internet IP changes, I need to update the DNS of all my HTTP endpoints. Rather than doing that, I have all my home URLs (*.wplr.rocks) CNAME to a single hostname, which a script updates with the correct IP.

Home Kubernetes cluster

So I admit it – I am completely obsessed with Kubernetes. Many of the web app/API deployment challenges of the past 20 years have been largely solved by the Kubernetes orchestration and scheduling platform. Kubernetes brings fault tolerance and highly available systems if set up correctly (i.e., use a Kubernetes cloud installer). I enjoy having the power to write YAML, apply it to infrastructure, and watch it eventually become what I told it to be. No longer do I need to write the scripts to do it – it does it for me 🙂

In my first Kubernetes home cluster, I re-used my home router and my old media center as a single node + master, but I was hit by a two-year-old Kubernetes bug. My old PC also sat out in the open, and since my 2-year-old son likes to press power buttons, he came over and constantly pressed the power button on my Kubernetes master. This caused me to look for a small mini computer that I could place in my cabinet out of view. I finally settled on this as my computer of choice. At $150 each for 4 cores, 4GB RAM, and 32GB NVMe storage, I thought it was a good deal and ample to run a few containers per node with an NFS-backed storage array.

These little machines booted via UEFI PXE (after pressing DEL to enter the BIOS and selecting the Realtek boot option on the save/exit page). I used this kickstart file, which installed CentOS 7 minimal by passing the ks argument to a CentOS initramfs.

After the servers were installed, I ran this script.

Note: on one of the three machines I received, I had to put modprobe br_netfilter in /etc/rc.local before /etc/sysctl.d/k8s.conf would apply.
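For context, /etc/sysctl.d/k8s.conf holds the standard kubeadm prerequisite settings that make bridged traffic visible to iptables (these two lines are from the kubeadm install docs; they require the br_netfilter module to be loaded, hence the modprobe above):

# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1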

Why Jesus and Easter Matter

There is a God and He loves you. John 3:16 says, “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”

Everyone is a sinner, and our sin separates us from the love that God wants to express toward us: “For all have sinned, and come short of the glory of God” (Romans 3:23); “But your iniquities have separated between you and your God, and your sins have hid his face from you…” (Isaiah 59:2).

Jesus came to break down the barrier of sin. “But God commendeth his love toward us, in that, while we were yet sinners, Christ died for us” (Romans 5:8).

The death of Jesus satisfied all the requirements, but Jesus did not stay dead. Three days later – Easter morning – He arose from the grave by His own power. “He is not here: for he is risen, as he said” (Matthew 28:6). “No man taketh it from me, but I lay it down of myself. I have power to lay it down, and I have power to take it again” (John 10:18).

Having a relationship with God is not about being good or religious. It is claiming the work Jesus fulfilled on the cross.

Jesus said in John 6:47, “He that believeth on me hath everlasting life.”

Those who trust in Jesus are saved from the penalty and power of sin and will have eternal life in Heaven with God.

“For whosoever shall call upon the name of the Lord shall be saved” (Romans 10:13).

Place your trust in Jesus today.

Not posting as much

I have not been posting as much tech content on my blog as I would like. The reason is that I have been mulling over the idea of submitting my tutorials to a publication and getting paid for my work. I still have not decided if this is the right course of action. Not that I owe the readers of my blog an explanation – I just wanted something other than a Christmas post as my newest blog post 😮

Merry Christmas

Merry Christmas from the Wippler family!

We pray that you are enjoying this season of celebration as we reflect upon our Saviour’s birth. Truly, for the Christian, Christ is the focal point of the season.

October was unusually mild and warm for Minnesota. Then on Friday, October 27, a snowstorm swept through our area, dumping the highest one-day snowfall total recorded in Duluth since October 1933! Quite a rude awakening for this California family. However, we are loving the snow and are adjusting to the colder climate (although we’re told the worst is yet to come) – like today, when the high is going to be -10°F!

On the family side of things, it’s been a joy to watch the Lord work in Mollie’s life this past year–first with her salvation in June and then with her decision to be baptized 3 weeks ago. She has a tender heart, and we’re praying the Lord uses her greatly. On a humorous note, she decided yesterday she wanted to style her hair like “mommy’s.” While Nicole prepared lunch downstairs, Mollie was upstairs adjusting her hairstyle.

Fortunately, not too much damage was done, and the hairdresser informed us that having shorter layers in the front and longer in the back is the “in” look.

Meg turned 4 on November 7, while Clark celebrated his 3rd birthday on December 15! How time flies! Meg continues to astound us with her love of learning. She’s a thinker for sure: the other day during family devotions, Nicole asked her why Mary washed Jesus’ feet with her hair, hoping she’d say something along the lines of “because she loved Jesus.” Instead she responded with, “because she didn’t have any towels.”

Now that Jake’s gotten a little older, Clark and Jake have become car/train playing buddies. It’s hilarious to watch them playing/dragging their blankets together around the house. Both boys are becoming more articulate. After Clark fell on his face one day, Nicole teasingly said, “Oh no! Your face has broken into 100 pieces. Let’s get some glue.” With a panic stricken tone, Clark told Mollie, “Get the glue, Mollie! Hurry! My face is broken!” We’re truly blessed with 4 precious children!

Speaking of blessings, it’s been a tremendous blessing to see the Lord working in our church. At our annual Christmas dinner, we had many visitors, with several making professions of salvation. The following Sunday morning we had our children’s Christmas program. The auditorium was filled, and several of our bus children had family in attendance. One little girl invited her school mentor and saw her in attendance. We’re praying that many of these visitors will be saved, baptized, and then discipled in the upcoming months.

In closing, let us wish your family a Happy New Year! We truly appreciate your prayers as we seek to be faithful to the Lord and His calling upon our lives. You are in our thoughts and prayers as well.

Love,

Andrew and Nicole

kubernetes health check

The day before Thanksgiving, I was pondering an issue I was having. I had pinned a package to a specific version in my Docker container, and the repository I grabbed it from stopped offering that specific version. This resulted in a container that Jenkins reported as built correctly but that was missing an integral package my application needed to function properly. It led me to believe I had to implement Puppet’s Lumogon in my Jenkins build process. Curious whether anyone had something like this already developed, I headed over to github.com, which eventually led me to compose a tweet about it.

The solution I ended up with was a readinessProbe, which communicates with kube-proxy to put a pod either in service or out of service. At first I thought the readinessProbe was a once-and-done check, but I found out later this is not the case. When a pod is launched, Kubernetes waits until the container is in the ready state, and we define what counts as a ready container through probes. Coupled with a Kubernetes deployment strategy, we can also ensure our application survives broken container updates.

Since the application I am supporting is already HTTP based, an HTTP check against an endpoint that reports on connectivity to core services was the most trivial to implement. I created a script that verifies connectivity to MariaDB, MongoDB, Memcached, and the message queue, and checks that certain paths on the NFS share are present. All of these items are important to my application, and most of them require certain configuration values in my containers to operate. Having Kubernetes run this script every time there is a new pod verifies I will never experience an outage due to a missing package again. As I mentioned before, I thought the readinessProbe was once-and-done; however, after implementing it, my metrics indicated the script was running every 10 seconds on every replica… this quickly added up!

After some chatting in the #kubernetes-users Slack, I got a better understanding of the readinessProbe and how it is designed to communicate with kube-proxy so that you can “shut off” a container by taking it out of rotation. This was not the behavior I wanted, so it was suggested that I create a state file. The state file is written after the first successful check, and if it is present, all checks are skipped. Due to the ephemeral nature of container storage, it can be assumed this file will never exist on a pod where the check has not been performed.
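Putting the pieces together, the probe ends up looking something like this (a sketch with hypothetical script and state-file paths, not my exact manifest):

readinessProbe:
  exec:
    command:
      - sh
      - -c
      # skip the expensive checks once they have passed on this pod
      - '[ -f /tmp/.ready ] && exit 0; /usr/local/bin/check-services.sh && touch /tmp/.ready'
  initialDelaySeconds: 15
  periodSeconds: 10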

Adding a user to k8s RBAC

In order to add a user to a Kubernetes cluster, we will need several things: kubectl, openssl, and the cluster’s CA.crt and CA.key (found in your head node’s /etc/kubernetes/pki folder).

First, create a private key for the new user. In this example, we will name the file employee.key:

openssl genrsa -out employee.key 2048

Next, we will need to create a certificate signing request – employee.csr – using the private key we just created (employee.key in this example). Make sure to specify your username and group in the -subj section (CN is the username and O is the group).

openssl req -new -key employee.key -out employee.csr -subj "/CN=username/O=developer"

Generate the final certificate employee.crt by signing the certificate signing request, employee.csr, with the cluster CA. In this example, the certificate will be valid for 90 days.

openssl x509 -req -in employee.csr -CA CA.crt -CAkey CA.key -CAcreateserial -out employee.crt -days 90

Give employee.crt, employee.key, and CA.crt to the new employee and have them follow the steps below.

# Set up the cluster
$ kubectl config set-cluster k8s.domain.tld --server https://api.k8s.domain.tld --certificate-authority /path/to/CA.crt --embed-certs=true

# Set up the credentials (a.k.a login information)
$ kubectl config set-credentials <name> --client-certificate=/path/to/cert.crt --client-key=/path/to/cert.key --embed-certs=true

# bind login to server
$ kubectl config set-context k8s.domain.tld --cluster=k8s.domain.tld --user=<name>
# Optional: append `--namespace=<namespace>` to the command to set a default namespace.

Note: You may move the certificates to a safe location since the commands included --embed-certs=true, which saved the certs in base64 form inside the kubeconfig.
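One more thing worth noting: the certificate only authenticates the user. With RBAC enabled, the group from the O field still needs to be authorized. A minimal sketch that binds the developer group from the example above to the built-in edit ClusterRole in a single namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-edit
  namespace: default               # grants access in this namespace only
subjects:
- kind: Group
  name: developer                  # the O= value from the certificate subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                       # built-in role: read/write most namespaced objects
  apiGroup: rbac.authorization.k8s.io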

Sometimes I post to my blog so I remember how to do a particular thing. This is one of those times.

Reusable containers with confd

I recently had the need to populate a file in a Docker container based upon whether the container is in production or development. I eventually came across confd, which lets me populate files based upon particular environment variables. While confd excels with distributed key-value stores, my needs (and infrastructure) are at a much simpler level.

confd requires a few folders: one for TOML resource configs (/etc/confd/conf.d/) and one for template files (/etc/confd/templates/). When confd runs, it looks at each TOML file in the conf.d directory and processes it according to its instructions.

In my repository example, I want the container to say hello to me when it senses a NAME environment variable and to print out the current datetime. If no environment variable is set, only the datetime is printed. To do this, I created a TOML file that looks like this:

[template]
src = "echo.tmpl"
dest = "/echo"

This file instructs confd to generate the echo file, place it in the root (/), and use /etc/confd/templates/echo.tmpl as the contents.
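The template itself is just a small shell script that uses confd’s getenv template function. A sketch that would produce the output shown below (the real file lives in the example repository):

#!/bin/sh
# /etc/confd/templates/echo.tmpl - sketch
{{if getenv "NAME"}}echo "Hello {{getenv "NAME"}}"{{end}}
echo "The current time is: $(date)"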

When building the container, we must include these configuration files and ensure confd is run to generate the destination file. My example Dockerfile does just that by copying all of the files into the container and running the docker-entrypoint script, which basically runs confd and then executes the newly generated file.

 andrew@wipplerxps > ~/git_repos/confd $  docker build -t blog-confd .
Sending build context to Docker daemon 57.34 kB
Step 1/9 : FROM centos:7.4.1708
 ---> 5076a7d1a386
Step 2/9 : LABEL maintainer "andrew.wippler@gmail.com"
 ---> Using cache
 ---> d712b31f7449
Step 3/9 : RUN mkdir -p /etc/confd/{conf.d,templates}
 ---> Running in f340bdcdf973
 ---> 1f0faa9b962f
Removing intermediate container f340bdcdf973
Step 4/9 : COPY docker/confd/ /etc/confd/
 ---> fb16dffc63ac
Removing intermediate container 133128cb7fc1
Step 5/9 : ADD https://github.com/kelseyhightower/confd/releases/download/v0.14.0/confd-0.14.0-linux-amd64 /usr/local/bin/confd
Downloading 17.61 MB/17.61 MB
 ---> a62b388274e6
Removing intermediate container 3f9ec343a5ab
Step 6/9 : RUN chmod +x /usr/local/bin/confd
 ---> Running in 1489dd02ea45
 ---> ab99a5fc5f95
Removing intermediate container 1489dd02ea45
Step 7/9 : COPY docker/docker-entrypoint.sh /var/local/
 ---> 16906971c8ef
Removing intermediate container 7a17a8e17e22
Step 8/9 : RUN chmod a+x /var/local/docker-entrypoint.sh
 ---> Running in 1562a6d06432
 ---> f963372159b1
Removing intermediate container 1562a6d06432
Step 9/9 : ENTRYPOINT /var/local/docker-entrypoint.sh
 ---> Running in 1b7e12c38b4c
 ---> f7d260597e0a
Removing intermediate container 1b7e12c38b4c
Successfully built f7d260597e0a
 andrew@wipplerxps > ~/git_repos/confd $  docker run blog-confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo has been updated
The current time is: Tue Nov 28 20:05:24 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $  docker run -e NAME="Andrew Wippler" blog-confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo has been updated
Hello Andrew Wippler
The current time is: Tue Nov 28 20:05:34 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $

While it is fun to say hello to yourself once in a while, I am actually using confd to modify an nginx.conf. When I pass in the SSL environment variable, nginx listens on port 443 with a self-signed cert and forwards all HTTP traffic to HTTPS. Obviously, in production I want to use a real SSL cert. Using confd allows me to have the same Docker container in development and production – the only difference being a configuration change.
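The nginx template boils down to a conditional around the listen directives; a rough sketch of the idea (not my exact template, and the certificate paths are hypothetical):

server {
{{if getenv "SSL"}}
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/self-signed.crt;
    ssl_certificate_key /etc/nginx/ssl/self-signed.key;
{{else}}
    listen 80;
{{end}}
    # ... the rest of the vhost ...
}
{{if getenv "SSL"}}
server {
    # redirect all plain HTTP traffic to HTTPS
    listen 80;
    return 301 https://$host$request_uri;
}
{{end}}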

Old glory

I was going through my articles I have collected over the years and found this little gem. The author is unknown.


I AM THE FLAG OF THE
UNITED STATES OF AMERICA

I am the flag of the United States of America.
My name is Old Glory.
I fly atop the world’s tallest buildings.
I stand watch in America’s halls of justice.
I fly majestically over institutions of learning.
I stand guard with power in the world.
Look up and see me.


Settling in

It has been a month and a half since we moved, and we are finally on a set schedule. The girls have started school, all boxes are unpacked, our house in California closed escrow, and we have successfully adjusted to the CST time zone.

It always amazes me how much God has blessed my family. We are all in good health, we are debt free, we have a roof over our head, and we have food to eat. I can attribute this all to God as I have no control over my health, I am horrible when it comes to big financial matters (such as retirement savings, stock trading, etc.), and I am not the best at selecting the best food choices for myself. It is only through God’s providential guidance that I am in the state I am.

I have been able to go fishing twice since arriving and have caught two fish. Neither was big enough to keep, but it was a great experience. It seems the best fishing out here is on a lake, which means one needs a boat to get to the middle of it. This has caused me to research fishing kayaks. It is doubtful I will get one for the 2017 season, but if I save enough money, I can definitely get one for the 2018 season 🙂

Being a bi-vocational Assistant Pastor is rather weird. It is a role I have not been in for very long, but I am very excited for what God has in store. Since being here in Duluth, I have been able to share the gospel with several people and see one make a profession of faith in Christ. It was neat to see the “lightbulb” appear when he understood that Jesus came to do all the work of salvation for us and that salvation is not dependent upon our good works or how we live. It is very sad that many people are so trapped in the religion of Catholicism, Lutheranism, or Atheism that they fail to see who Jesus is, why He came, and how we can experience life through a relationship with Him.

I am very thankful I can have a second job that is remote and keeps me in the tech industry. I like working with bleeding-edge software such as Docker, Kubernetes, Puppet, and the like. It also allows me to scratch the itch I have for the nerdy side of tech – coding. I have been able to maintain a legacy PHP app while developing some in NodeJS and Go. Mostly I have been sharing cool things about GitHub, build pipelines, and the philosophy of “let the robots do it.”

My coffee survival habit has been amplified by the purchase of a Ninja coffee maker last Amazon Prime Day. I was not able to use it until we moved here, and I am quite satisfied with how it makes iced coffee. It also came with a recipe I have been following for my daily iced coffee – it calls for half-and-half with a few ounces of flavored syrup. I think this is the best approach to iced coffee.

Moving Adventures Part 2

(Note: This guest post is from my wife, Nicole. These are the events that happened July 25th, 2017.)

After 6 hours of rest, we awakened (somewhat) rested and ready to resume our journey. The morning was fairly uneventful, and we were back on the road by 9. As we were fueling up, we realized that Clark’s cup (which he went to bed with) had accidentally been left somewhere amid the sheets despite our going through the room 2-3 times. No worries! We remembered that we had brought two extra sippy cups.

As we merged onto I-76, we spotted what we believe to be our moving van. Too funny! The day before, we’d seen the same truck as we left Barstow. At least we know our belongings should arrive by Friday, lol! An hour into our drive, all the kids except Mollie fell asleep.

Thankfully, the next leg of our journey was fairly uneventful, and we were able to stop for lunch in Kearney by 2:45. Other than an icky diaper and ordering the wrong sandwich for Andrew, lunch was quite pleasant. In fact, Jake ate more than Clark or Meg! Thanks to everyone’s prayers, the children were very well behaved as we made our way to the next stop – Des Moines.

For the next 3.5 hours the kids played with toys, watched cartoons, and slept (praise the Lord!!). It seems whenever we’re close to our destination, one of the younger three decides they’ve had it and begins to fuss. Thankfully, Clark perks up with food or cars, and Jake loves snacks and playing with random items (the wipes bucket, a water bottle, a thermos, his favorite afghan, etc.). As a reward, we’re hoping to get a little pool time at the hotel. So thankful we’re only driving 5 hours tomorrow and getting a good 2-hour break at the Mall of America!