I think I will keep LastPass

LastPass was recently hacked: the attacker stole keys from a developer who had access to the company’s backup files. With those decryption keys, the attacker could reconstruct records from the backups along the lines of:

somesite.com,<plaintext_username>,<hashed_password>

I can assume my data has been leaked. I am a LastPass user; however, I do not think this is a good reason for my departure from LastPass.

This is not the first time a hacker has obtained this sort of information from LastPass. I was also a LastPass user during the previous incident, and my passwords were never compromised after that hack. The zero-knowledge password storage employed by LastPass seems to have worked in the past, and I anticipate it being sufficient again.

As a safety precaution, I went ahead and changed my financial passwords, but other than that step, is there anything else that needs to be done? Perhaps I will change my master password once more. If LastPass gets hacked again, at least all my hashes will be different.
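To illustrate that last point: vault keys are derived from the master password with PBKDF2, so a new master password yields entirely different derived values. A minimal Ruby sketch (the iteration count and parameters here are illustrative, not LastPass’s exact settings):

```ruby
require 'openssl'

# Derive a 256-bit vault key from a master password.
# Parameters are illustrative only.
def vault_key(master_password, salt, iterations = 100_100)
  OpenSSL::PKCS5.pbkdf2_hmac(master_password, salt, iterations, 32,
                             OpenSSL::Digest.new('SHA256'))
end

old_key = vault_key('old master password', 'user@example.com')
new_key = vault_key('new master password', 'user@example.com')
puts old_key == new_key # prints "false": a new master password changes every derived value
```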

Incarnation Haiku

God became a man.
He experienced life.
A life without sin.

He, unworthily,
Received punishment for sin.
God the Son had died.

Three days have now passed.
And Jesus did not stay dead
He came back to life.

It was His power.
Without the help of others.
Life came back to Him.

By resurrection,
Jesus says, “Place trust in me.”
“I make all things new.”

This event declares
And even more loudly proclaims
Flawless victory.

Impossible to define

It is impossible to define a complex thought with one sentence. Even the previous sentence demands further information. What kind of thoughts do I mean? What is the reasoning for saying such a thing? Context adds further information to the reasoning behind such a statement. It also sets the stage for the real reason for the statement or provokes further thought.

If one considers the previous paragraph to be accurate in its message, then one should not settle for a single verse to explain a complex doctrine, such as salvation.

Asking ChatGPT to define love

As programmers develop new text-generation methods, the results can be fascinating. Recently, ChatGPT responses have been propagating across Twitter, with many users asking the AI text generator to solve common problems with known human solutions. Some posts have been insightful about how the chatbot was trained to learn and associate words. However, to those who have programmed AI models, the shortcomings are apparent, and there is no immediate threat of a robot uprising. Currently, many perceived issues are corrected by rewording the prompt, indicating the idiom “garbage in, garbage out” is still valid.

I was interested in asking ChatGPT to describe the quality of love found in John 3:16.

Love is an emotion that, in Christian circles, is divided into different levels or types. One can have a love for a sports team, a love for a spouse, and a love for chocolate. However, these types of love vary with the object and with the one who loves.

In English, we use the same word, love, to describe these varying emotions, but other languages express the varying degrees with different word choices. It is common for Bible preachers to equate Greek word choices with varying forms of love, such as in the dialog between Peter and Jesus in John 21. (Although Carson has made a good case for overturning this perceived relation.1)

My recent study of the Greek text of John 3:16 revealed a language barrier between what is written in Greek and what is translated into English. The verb for love appears in a form with no exact English equivalent. It is translated in the English past tense, which suggests something performed in previous times that no longer applies today. However, as John wrote it in Greek, the action of the phrase exists in an eternal state.

Perhaps ChatGPT could describe this immeasurable amount of love:

ChatGPT attempted to express it by stating: “Ultimately, the best way to express an immeasurable amount of love is to show it through your actions, and to make the other person feel loved and valued.”

The response is what you would expect of popular psychology: be there for a person regardless of the circumstance. The fallacy of this thought is easily identified in a situation where the one you love has escaped from prison. In that instance, supporting them would require you to harbor a known fugitive – an act punishable under US law. It is also impossible to be supportive of everyone’s actions at all times. Can immeasurable love be shown to groups that hold contrary opinions? Yet this is the type of love expressed in John 3:16.

God’s love is an immeasurable amount of love, which has lasting effects that continually build up. The potency by which God shows His love is presented through its conceptualization and increasing exponential power, which began in eternity past. In essence, God’s love will continually build upon the foundation until the recipient has no more room to receive it. At this point, God’s love is exponentially more.

The very nature of God’s love felt for the world is signified by this thought: God expressed it in giving the most prized possession He had – His only begotten Son. The power and intensity by which God loves are based on the virtue of Jesus Christ.

God so loved; therefore, He gave.

1 D. A. Carson, Exegetical Fallacies, 2nd ed. (Grand Rapids, MI: Baker Books, 2013), 51–53.

Back to the blogs

The painfully slow death of social media is pushing the masses back into our blog-o-spheres. In 2 years, the only people doing social media will be those who are not technical enough to click a one-button WordPress/Ghost install and pay $5/month to a hosting provider.
Of course, there will be the faithful few who still insist that Markdown is the way to go.

For now, I want to experiment and see how a “status” post format works.

ReMarkable 2 Review

At first glance, the reMarkable seemed expensive for a one-trick pony. Replacing a pad of paper for notes was its only selling point and its main focus. Could it really fit my use case and deliver the experience I needed?

As one who regularly types notes and prints them for use in public speaking, I sought a replacement for physical paper. I had multiple copies of notes, printed quite regularly, and often got the copies confused. In my flow, I numbered the pages so as not to confuse the order in which they were to be referenced. I reprinted them each time I corrected the notes or whenever the paper withered from use. This seemed like a huge waste of paper. (At an estimated $0.05 per sheet, 10 pages of notes would cost approximately $2 in reprints by the time I was finished with them.)

I first started using my Kindle Paperwhite as a replacement for printing. I could easily send myself a PDF and have it appear on my Kindle. Over time, I realized the screen was perfect for the lighting environment where I routinely looked at my notes – a pulpit. It was like having my notes printed, but without the glare of an LED tablet. One flaw of the Kindle stood out: while it was the perfect size for reading a book, it was too small for reading full-sized PDF documents at a distance. I learned that I had to type “convert” in the subject line of each PDF email sent to my Kindle. This allowed better font viewing, but my formatting would not appear the way I needed it to.

When I first saw the reMarkable, it looked like it would alleviate both concerns. Once I received my reMarkable (thank you, stimulus money), I was able to import a few PDFs and view them full-sized on the glare-free e-ink display. The formatting was also exactly as I set it in my word processor. Everything seemed as if it was going to be perfect; however, the reMarkable had one flaw – it does not have a typing program. Yes, it has an on-screen keyboard for filling out items such as filenames and emails, but if you want to word process, you have to write out the letters by hand. While this is good for handwritten note takers, it threw a wrench in my workflow. I could not use the reMarkable as an all-inclusive document processor that would free me from typing on a computer. (To reMarkable’s credit, they did not market the device as such; I was erroneously assuming there would be some sort of typing application included.)

After reviewing my workflow of typing my document in GSuite, saving it as a PDF, and uploading it to the reMarkable, I decided this required too much effort – especially considering I update my typed documents quite frequently. There had to be an easier way. Looking for options, I found the google-drive-sync script and extended it to include Documents and convert them to PDFs. My workflow is now as follows:

1. I type a document in GSuite
2. It syncs to the reMarkable cloud and my device
3. I copy the file to a local folder and make annotations while reviewing
4. I update the file in GSuite
5. Repeat from step 2

I still have a few flaws to work out with the sync, but it works for now.

As for the reMarkable’s other aspects – note taking and eBook reading – I have to say I quite enjoy the experience. Jotting down a to-do list, quickly putting thoughts down (instead of waiting for a note app to boot on a phone or computer), and notating on top of eBooks have been a breeze. The experience is just like using paper, and even as someone who can usually detect stylus lag, I cannot detect any here.

For a device that is marketed as replacing paper notes, it does quite well. My use case is also satisfied with some modifications that I have made. Overall, I am satisfied with the purchase and its two-year break-even point.

Use the same Dockerfile – please

As containers have progressed, Docker has stood out as the de facto standard. As laggards come up to speed, Dockerfiles can be seen in many open-source repositories. On top of that, I have seen a few repos with a Dockerfile-prod, Dockerfile-dev, Dockerfile-test, etc.

Additionally, you will find an if clause in the CMD statement, such as:

CMD if [ "$REACT_NODE_ENV" = "development" ]; \
  then yarn dev;  \
  else yarn build && yarn start --only=production; \
  fi

To those repositories, I have one burning question:

WHY?

Container start commands can be overridden at run time. Here is how to do it:

# Dockerfile
FROM alpine:3.6

CMD echo "production start command"
# docker-compose.yml
version: '3'
services:
  dev-server:
    build: .
    command: echo 'development start command'
# kubernetes-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cool-app
  namespace: cool-app-testing
  labels:
    app: cool-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cool-app
  template:
    metadata:
      labels:
        app: cool-app
    spec:
      containers:
      - name: cool-app
        image: alpine:3.6
        command:
        - echo
        - 'testing start command'
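The same override works with plain docker run; anything after the image name replaces the baked-in CMD (the image name here is hypothetical):

```shell
# default: runs the image's CMD ("production start command")
docker run --rm cool-app

# development: override the start command at run time
docker run --rm cool-app sh -c "yarn dev"
```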

WordPress Containerization Boilerplate

As a step further to my previous post, I have created a boilerplate for future WordPress projects. It can be accessed at https://github.com/andrewwippler/WordPress-Containerization-Boilerplate.

To quickly start a WordPress environment, simply run the following commands:

git clone git@github.com:andrewwippler/WordPress-Containerization-Boilerplate.git
cd WordPress-Containerization-Boilerplate/
docker-compose up

then visit http://localhost:8080.

More instructions are in the repo README.

Happy Plugin/Theme development.

Docker-izing WordPress for Kubernetes

WordPress is amazingly popular considering how antiquated its file structure and code appear to be. Even still, it is the easiest CMS that I have used, and the community has created plugins to make the copy-folder-for-a-new-theme/plugin workflow at least tolerable.

A challenge comes when one wants to serve this 1990s-style web application in a more modern way, such as running inside a container on top of Kubernetes. Containers are meant to be immutable and treated as read-only (no change to files in the container after they are built). Containers are supposed to be a point-in-time release of software: I can roll back to a specific container version and have that specific code running. This causes a problem when one wants to run a file-dependent application such as WordPress.

The best I could come up with for running WordPress in a container is a forward-only method of deploying code (basically, giving up the ability to use a previous version of code). There is a way to keep that ability, but it would mean storing everything (including uploads) inside an ever-growing container, or using a central object store such as S3 for uploads. It would also require a rebuild of the container every time a plugin is updated – which would presumably be every hour. My WordPress deployments are so small that I can hardly justify using S3 for uploads, keeping the plugins in sync, and going backwards in time.

When deploying to Kubernetes, one can scale the replicas to N copies. Keeping plugins, themes, and updates the same across all replicas requires a shared READ WRITE MANY (RWX) volume. This could be a GlusterFS volume or NFS, but it cannot be an AWS EBS volume or any other single-attach block storage.

When looking at the available WordPress images, three seem interesting. With the official image, I like that I can use php-fpm and alpine. The next two most popular implementations of WordPress have very bloated Dockerfiles.
I have come to the conclusion that my WordPress container will have to be built from scratch.
The Dockerfile is very similar to the official WordPress container. It uses php:7.2-fpm-alpine as the base image, adds in nginx, and inserts a generic wp-config.php file.
The folder structure for the container is as follows:
WordPress Container Folder
├── docker-entrypoint.sh
├── Dockerfile
├── html
│   └── ... Contents of wordpress-X.Y.Z.zip
├── nginx.conf
└── wp-config.php
It can be built by running a command similar to docker build -t andrewwippler/wordpress:latest .
nginx.conf is a very basic configuration file with gzip and cache headers. The really neat things happen in the docker-entrypoint.sh file.
I borrowed the database creation script; however, since PHP was already installed in the container, I wrote a few more checks in PHP rather than bash. For instance, the container places the local code in /var/www/html-original and rsyncs it to /var/www/html, where the webserver sees it – but only if the code in html-original is newer than the code in html. This allows an operator to mount a storage volume at /var/www/html that can be shared across Kubernetes Deployment replicas. The code for this is:
// open stderr for status messages ($stderr is defined once near the top
// of docker-entrypoint.sh; shown here so the snippet is self-contained)
$stderr = fopen('php://stderr', 'w');

// see if we need to copy files over
include '/var/www/html-original/wp-includes/version.php';
$dockerWPversion = $wp_version;

if (file_exists('/var/www/html/wp-includes/version.php')) {
    include '/var/www/html/wp-includes/version.php';
    $installedWPversion = $wp_version;
} else {
    $installedWPversion = '0.0.0';
}

fwrite($stderr, "dockerWPversion: $dockerWPversion - installedWPversion: $installedWPversion\n");
if(version_compare($dockerWPversion, $installedWPversion, '>')) {
    fwrite($stderr, "Installing wordpress files\n");
    exec('rsync -au /var/www/html-original/ /var/www/html');
}
I have also included a theme-only check that will update the theme if it has changed. This is necessary to update the theme files when the version of WordPress has not changed.
if (filemtime('/var/www/html-original/wp-content/themes') > filemtime('/var/www/html/wp-content/themes')) {
    fwrite($stderr, "Updating theme files\n");
    exec('rsync -au --delete-after /var/www/html-original/wp-content/themes/ /var/www/html/wp-content/themes');
}
All files I have referenced in this article are located in a gist. In addition to those files, a docker-compose.yml like the one below might be helpful for your local development:
version: '2'
services:
  db:
    image: mariadb:10
    volumes:
      - ./tmp/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secretPASS

  wordpress:
    build: wordpress
    volumes:
      - ./html:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - db
    environment:
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=root
      - WORDPRESS_DB_PASSWORD=secretPASS
    ports:
      - 8080:80

Kubernetes: Heapster to Metrics Server

I recently updated my kubernetes cluster from 1.10.2 to 1.11.0 and noticed heapster is deprecated and will be completely removed by version 1.13.0. I thought this would be the perfect time to try out metrics-server. I had to download the git repo to apply the kubernetes yaml to my cluster. Since that is not as convenient as I would like (I prefer kubectl apply -f <url> when it comes from a trusted source), I am writing the commands below for easy access in the future:


kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/auth-delegator.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/auth-reader.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-apiservice.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-server-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-server-service.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/resource-reader.yaml

Note: this is for clusters running v1.8.0 or greater.

Jenkins-x on home kubernetes cluster

Jenkins-x appears to be the next big thing in CI/CD workflows – especially if you develop applications on kubernetes. There were a few tweaks I needed to make to set it up:

  1. I had to manually create Persistent Volumes (no big deal, below are what I have for my NFS share)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins
      namespace: jx
      labels:
        app: jenkins
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-chartmuseum
      namespace: jx
      labels:
        app: jenkins-x-chartmuseum
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-chartmuseum"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-docker-registry
      namespace: jx
      labels:
        app: jenkins-x-docker-registry
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-docker-registry"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-mongodb
      namespace: jx
      labels:
        app: jenkins-x-mongodb
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-mongodb"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-nexus
      namespace: jx
      labels:
        app: jenkins-x-nexus
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-nexus"
  2. I had to modify the install line
    jx install --ingress-namespace ingress-nginx --domain wplr.rocks --tls-acme true --skip-ingress
  3. I had to modify the jenkins-x-mongodb deployment to use the image mongo:3.6.5-jessie. I still wonder why people use bitnami images.
  4. I had to add
    securityContext:
      runAsUser: 1024

    to the jenkins-x-nexus deployment. The container was trying to change permissions on my NFS mount; I am not sure why my Synology NFS does not like permission changes.
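For reference, the patched part of the deployment looks roughly like this (excerpt; only the securityContext was added):

```yaml
# jenkins-x-nexus Deployment (excerpt): run as a fixed UID so the
# container does not attempt to chown the NFS mount at startup
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1024
```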

 

Even after those changes, jenkins-x-monocular-ui still fails to start -_- … I have run out of time for now. More debugging to come later (MUCH MUCH later)

DHCP IP updater

This is the script I use to change the DNS record of my home IP when it changes. I have it running once a week and have not noticed a lapse in coverage. If your ISP has DHCP configured correctly, you will receive the same IP address when you are due for a renewal. Otherwise, you need a script like the one below.

#!/usr/bin/ruby

require 'aws-sdk'
require 'socket'

# Returns the first interface address that is a public IPv4.
# Note: this only works when the machine holds the public IP directly on an
# interface; behind NAT, every address is private and this returns nil.
def my_first_public_ipv4
  Socket.ip_address_list.detect { |intf| intf.ipv4? && !intf.ipv4_loopback? && !intf.ipv4_multicast? && !intf.ipv4_private? }
end

ip = my_first_public_ipv4&.ip_address # safe navigation: nil instead of crashing when no public address is found

unless ip.nil?

change = {
  :action => 'UPSERT',
  :resource_record_set => {
    :name => "home.andrewwippler.com",
    :type => "A",
    :ttl => 600,
    :resource_records => [{:value => ip}]
}}

route53 = Aws::Route53::Client.new(
    region: 'us-east-1'
)
route53.change_resource_record_sets({
  hosted_zone_id: '/hostedzone/XXXXXXXXXXXXXXX', # required
  change_batch: { # required
    changes: [change],
  },
})

end
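To run the script weekly as described, a crontab entry along these lines works (the path and schedule are my own choices):

```
# run Sunday at 03:00; log output for troubleshooting
0 3 * * 0 /usr/local/bin/update-home-dns.rb >> /var/log/dns-update.log 2>&1
```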

Allowing outside access to Home Kubernetes Cluster

After I created a home kubernetes cluster, I immediately wanted to allow external access to pods/services/ingresses hosted inside the cluster. One must be aware that in bare-metal environments, nothing answers the API call to create a load balancer. With no cloud provider behind it, kubernetes cannot request external IP addresses or provision the resources one has come to expect in cloud environments such as AWS. This is a huge bummer – especially since dynamically built environments are fun to have.

To route traffic to web services inside of kubernetes, you have two options available: an Ingress or a Service. Services can be exposed via NodePort, LoadBalancer, or ClusterIP. On bare metal, LoadBalancer will never work (unless you code your own API call to configure a load balancer outside of kubernetes). ClusterIP might work if you want to manage a routing table somewhere inside your network, and NodePort will work if you want to manage a port forwarding table on your router. None of these options are fun for home labs on bare metal. An Ingress acts like a layer 7 router: it reads the hostname and path of the incoming HTTP request and routes to the applicable services. This works great for a dynamic environment where I am going to host multiple HTTP endpoints.

The overall view of this traffic is going to be: Internet > Router > k8s Ingress > k8s Service > Pod(s).

To expose the ingress controller in kubernetes, you have to front it with a Service. In cloud environments, that Service is created with type LoadBalancer; in home labs, we create it with type NodePort and port forward on the router to any node in the kubernetes cluster.

$ kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.102.173.184   <none>        80/TCP                       3d
ingress-nginx          NodePort    10.110.162.247   <none>        80:30746/TCP,443:32641/TCP   3d

In my home lab, I am port forwarding on my router 0.0.0.0:80 -> <any_node>:30746 and 0.0.0.0:443 -> <any_node>:32641.

Since I have a non-traditional home router (a Linux server with two NICs), I could either enter these forwards into iptables or improve on that by setting up a load balancer such as nginx. nginx allows me to load balance the forwarded ports across all my nodes and gives me an easy config file to edit. Because I also want to use cert-manager with Let’s Encrypt free SSLs, I chose to use the TCP stream module of nginx.
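A minimal sketch of that nginx stream configuration, assuming three nodes (the node IP addresses are hypothetical; the NodePorts match the output above):

```nginx
# /etc/nginx/nginx.conf (excerpt) - forward ports 80/443 to the
# ingress-nginx NodePorts on every kubernetes node
stream {
    upstream k8s_http {
        server 192.168.0.111:30746;
        server 192.168.0.112:30746;
        server 192.168.0.113:30746;
    }
    upstream k8s_https {
        server 192.168.0.111:32641;
        server 192.168.0.112:32641;
        server 192.168.0.113:32641;
    }
    server {
        listen 80;
        proxy_pass k8s_http;
    }
    server {
        # TLS is passed through untouched so cert-manager can serve its certs
        listen 443;
        proxy_pass k8s_https;
    }
}
```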

Another hiccup (so to speak) in home-based labs is that ISPs hand out DHCP addresses. When my internet IP changes, I would need to update the DNS of all my HTTP endpoints. Rather than doing that, I have all my home URLs (*.wplr.rocks) CNAME to a single hostname, which a script updates with the correct IP.
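In zone-file terms, the scheme looks something like this (the record and TTL are illustrative):

```
; every home URL chases one record, which the update script maintains
*.wplr.rocks.  300  IN  CNAME  home.andrewwippler.com.
```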

Home Kubernetes cluster

So I admit it – I am completely obsessed with Kubernetes. Many of the web app/API deployment challenges of the past 20 years have been largely solved by the Kubernetes orchestration and scheduling platform. Kubernetes brings fault tolerance and, if set up correctly (i.e. with a Kubernetes cloud installer), highly available systems. I enjoy having the power to write yaml, apply it to infrastructure, and watch it eventually become what I told it to be. No longer do I need to write the scripts to do it – it does it for me 🙂

In my first kubernetes home cluster, I re-used my home router and my old media center as a single node + master, but I was hit by a 2-year-old Kubernetes bug. My old PC was also out in the open, and since my 2-year-old son likes to press power buttons, he would come over and constantly press the power button on my Kubernetes master. This drove me to find a small mini computer that I could place in my cabinet, out of view. I finally settled on this as my computer of choice. At $150 each for 4 cores, 4GB RAM, and 32GB NVMe, I thought it was a good deal and ample to run a few containers per node with an nfs-backed storage array.

These little machines booted via UEFI PXE (after pressing DEL to enter the BIOS and selecting the Realtek boot option on the save/exit page). I used this kickstart file, which installed CentOS 7 minimal by passing the ks argument to a CentOS initramfs:

After the servers installed, I ran this script:

Note: on one of the three machines I received, I had to put modprobe br_netfilter in /etc/rc.local before /etc/sysctl.d/k8s.conf would apply.
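A more persistent alternative to the rc.local workaround is systemd’s modules-load mechanism (the filename here is my own choice):

```
# /etc/modules-load.d/br_netfilter.conf
# load br_netfilter at boot so /etc/sysctl.d/k8s.conf applies cleanly
br_netfilter
```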

Why Jesus and Easter Matters

There is a God and He loves you. John 3:16 says, “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”

Everyone is a sinner, and our sin separates us from the love that God wants to express toward us: “For all have sinned, and come short of the glory of God” (Romans 3:23); “But your iniquities have separated between you and your God, and your sins have hid his face from you…” (Isaiah 59:2).

Jesus came to break down the barrier of sin. “But God commendeth his love toward us, in that, while we were yet sinners, Christ died for us” (Romans 5:8).

The death of Jesus satisfied all the requirements, but Jesus did not stay dead. Three days later – Easter morning – He arose from the grave by His own power. “He is not here: for he is risen, as he said” (Matthew 28:6). “No man taketh it from me, but I lay it down of myself. I have power to lay it down, and I have power to take it again” (John 10:18).

Having a relationship with God is not about being good or religious. It is claiming the work Jesus fulfilled on the cross.

Jesus said in John 6:47, “He that believeth on me hath everlasting life.”

Those who trust in Jesus are saved from the penalty and power of sin and will have eternal life in Heaven with God.

“For whosoever shall call upon the name of the Lord shall be saved” (Romans 10:13).

Place your trust in Jesus today.