Tag Archives: hosting

The painfully slow death of social media is pushing the masses back into our blog-o-spheres. In 2 years, the only people doing social media will be those who are not technical enough to click a one-button WordPress/Ghost install and pay $5/month to a hosting provider.
Of course, there will be the faithful few who still insist that Markdown is the way to go.

For now, I want to experiment and see how a “status” post format works.

WordPress Containerization Boilerplate

Following up on my previous post, I have created a boilerplate for future WordPress projects. It can be accessed at https://github.com/andrewwippler/WordPress-Containerization-Boilerplate.

To quickly start a WordPress environment, simply run the following commands:

git clone git@github.com:andrewwippler/WordPress-Containerization-Boilerplate.git
cd WordPress-Containerization-Boilerplate/
docker-compose up

then visit http://localhost:8080.

More instructions are in the repo README.

Happy Plugin/Theme development.

Docker-izing WordPress for Kubernetes

WordPress is amazingly popular considering how antiquated its file structure and code appear to be. Even still, it is the easiest CMS I have used, and the community has created plugins that make the copy-a-folder-for-a-new-theme/plugin workflow at least tolerable. The challenge comes when one wants to serve this 1990s-style web application in a more modern way, such as running it inside a container on top of Kubernetes.

Containers are meant to be immutable and treated as read-only. (No changes to files in the container after it is built.) A container is supposed to be a point-in-time release of software; as such, I can roll back to a specific container version and have that specific code running. This causes a problem with a file-dependent application such as WordPress. The best I could come up with for running WordPress in a container is a forward-only method of deploying code (basically, giving up the ability to run a previous version of the code). There is a way to keep that ability, but it would mean storing everything (including uploads) inside an ever-growing container or using a central object store such as S3 for uploads. It would also require a rebuild of the container every time a plugin is updated, which would presumably be every hour. My WordPress deployments are so small that I can hardly justify using S3 for uploads, keeping the plugins in sync, and going backwards in time.

When deploying to Kubernetes, one can scale a Deployment to N replicas. Keeping plugins, themes, and updates the same across all replicas requires a shared READ WRITE MANY (RWX) volume; a sketch of one follows below. This could be a GlusterFS volume or NFS, but it cannot be an AWS EBS volume or any other single-attach block storage.

Looking at the available WordPress images, three seem interesting. With the official image, I like that I can use php-fpm and Alpine. The next two most popular WordPress images have very bloated Dockerfiles. I have come to the conclusion that my WordPress container will have to be built from scratch.
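For reference, the shared RWX volume mentioned above might be declared like this (a minimal sketch; the NFS server, path, names, and size are placeholders for whatever your environment provides):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-html
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany   # RWX: mountable by every replica at once
  nfs:
    server: 192.168.0.101
    path: "/volume1/k8s/wordpress"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-html
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

The claim is then mounted at /var/www/html in the Deployment's pod template.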
The Dockerfile is very similar to the official WordPress container's. It uses php:7.2-fpm-alpine as the base image, adds nginx, and inserts a generic wp-config.php file.
The folder structure for the container is as follows:
WordPress Container Folder
├── docker-entrypoint.sh
├── Dockerfile
├── html
│   └── ... Contents of wordpress-X.Y.Z.zip
├── nginx.conf
└── wp-config.php
It can be built by running a command similar to docker build -t andrewwippler/wordpress:latest .
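Based on the description above, a minimal sketch of the Dockerfile might look like the following (the package list and entrypoint wiring are assumptions for illustration, not the exact published file):

FROM php:7.2-fpm-alpine

# nginx fronts php-fpm inside the same container; rsync is used by the entrypoint
RUN apk add --no-cache nginx rsync \
    && docker-php-ext-install mysqli

COPY nginx.conf /etc/nginx/nginx.conf
COPY html/ /var/www/html-original/
COPY wp-config.php /var/www/html-original/wp-config.php
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh

EXPOSE 80
# the entrypoint syncs code into /var/www/html, starts nginx, then execs php-fpm
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]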
nginx.conf is a very basic configuration file with gzip and cache headers. The really neat things happen in the docker-entrypoint.sh file.
I borrowed the database creation script; however, since PHP was already installed in the container, I wrote a few more checks in PHP rather than bash. For instance, the container places the shipped code in /var/www/html-original and rsyncs it to /var/www/html, where the webserver sees it, but only if the code in html-original is newer than the code in html. This allows an operator to mount a storage volume at /var/www/html that can be shared across Kubernetes Deployment replicas. The code for this is:
// see if we need to copy files over
$stderr = fopen('php://stderr', 'w');
include '/var/www/html-original/wp-includes/version.php';
$dockerWPversion = $wp_version;

if (file_exists('/var/www/html/wp-includes/version.php')) {
    include '/var/www/html/wp-includes/version.php';
    $installedWPversion = $wp_version;
} else {
    $installedWPversion = '0.0.0';
}

fwrite($stderr, "dockerWPversion: $dockerWPversion - installedWPversion: $installedWPversion\n");
if (version_compare($dockerWPversion, $installedWPversion, '>')) {
    fwrite($stderr, "Installing wordpress files\n");
    // -a preserves permissions/timestamps, -u skips files that are newer in the destination
    exec('rsync -au /var/www/html-original/ /var/www/html');
}
I have also included a theme-only check that will update the theme if it has changed. This is necessary to update the theme files when the version of WordPress has not changed.
if (filemtime('/var/www/html-original/wp-content/themes') > filemtime('/var/www/html/wp-content/themes')) {
    fwrite($stderr, "Updating theme files\n");
    // --delete-after removes theme files that no longer exist in the image
    exec('rsync -au --delete-after /var/www/html-original/wp-content/themes/ /var/www/html/wp-content/themes');
}
All files I have referenced in this article are located in a gist. In addition to those files, a docker-compose.yml file like the following might be helpful for your local development:
version: '2'
services:
  db:
    image: mariadb:10
    volumes:
      - ./tmp/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secretPASS

  wordpress:
    build: wordpress
    volumes:
      - ./html:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - db
    environment:
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=root
      - WORDPRESS_DB_PASSWORD=secretPASS
    ports:
      - 8080:80

Jenkins X on a home Kubernetes cluster

Jenkins X appears to be the next big thing in CI/CD workflows, especially if you develop applications on Kubernetes. There were a few tweaks I needed to make to set it up:

  1. I had to manually create Persistent Volumes (no big deal; below is what I have for my NFS share)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins
      namespace: jx
      labels:
        app: jenkins
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-chartmuseum
      namespace: jx
      labels:
        app: jenkins-x-chartmuseum
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-chartmuseum"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-docker-registry
      namespace: jx
      labels:
        app: jenkins-x-docker-registry
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-docker-registry"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-mongodb
      namespace: jx
      labels:
        app: jenkins-x-mongodb
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-mongodb"
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-x-nexus
      namespace: jx
      labels:
        app: jenkins-x-nexus
    spec:
      capacity:
        storage: 30Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.0.101
        path: "/volume1/k8s/jenkins/jenkins-x-nexus"
  2. I had to modify the install line
    jx install --ingress-namespace ingress-nginx --domain wplr.rocks --tls-acme true --skip-ingress
  3. I had to modify the jenkins-x-mongodb deployment to use the image mongo:3.6.5-jessie (a kubectl one-liner for this follows the list). I still wonder why people use bitnami images.
  4. I had to add
    securityContext:
      runAsUser: 1024

    to the jenkins-x-nexus deployment (also shown after the list). The container was trying to change permissions on my NFS mount; I am not sure why my Synology NFS does not like permission changes.
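For reference, both deployment tweaks can be applied with kubectl. This is a sketch; in particular, the mongodb container name is a guess (check it with kubectl describe deployment jenkins-x-mongodb -n jx):

# item 3: swap the bitnami image for the upstream one
kubectl set image deployment/jenkins-x-mongodb mongodb=mongo:3.6.5-jessie -n jx

# item 4: run nexus as a fixed UID so the container stops trying to chown the NFS mount
kubectl patch deployment jenkins-x-nexus -n jx --type merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1024}}}}}'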

Even after those changes, jenkins-x-monocular-ui still fails to start -_- … I have run out of time for now. More debugging to come later (MUCH MUCH later)

DHCP IP updater

This is the script I use to update the DNS record for my home IP when it changes. I have it running once a week and have not noticed a lapse in coverage. If your ISP has DHCP configured correctly, you will receive the same IP address when your lease is due for renewal; otherwise, you need a script like the one below.

#!/usr/bin/ruby

require 'aws-sdk'
require 'socket'

# first IPv4 address that is not loopback, multicast, or RFC 1918 private space
def my_first_public_ipv4
  Socket.ip_address_list.detect { |intf| intf.ipv4? && !intf.ipv4_loopback? && !intf.ipv4_multicast? && !intf.ipv4_private? }
end

# detect returns nil when no public address is bound, so guard before calling ip_address
addr = my_first_public_ipv4

unless addr.nil?
  ip = addr.ip_address

  change = {
    :action => 'UPSERT',
    :resource_record_set => {
      :name => 'home.andrewwippler.com',
      :type => 'A',
      :ttl => 600,
      :resource_records => [{ :value => ip }]
    }
  }

  route53 = Aws::Route53::Client.new(
    region: 'us-east-1'
  )
  route53.change_resource_record_sets({
    hosted_zone_id: '/hostedzone/XXXXXXXXXXXXXXX', # required
    change_batch: { # required
      changes: [change],
    },
  })
end
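To match the "once a week" schedule, a crontab entry along these lines works (the script path is hypothetical; point it at wherever you saved the script):

# run every Sunday at 3:00 AM
0 3 * * 0 /usr/local/bin/update-home-dns.rb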

Allowing outside access to Home Kubernetes Cluster

After I created a home Kubernetes cluster, I immediately wanted to allow external access to pods/services/ingresses hosted inside the cluster. One must be aware that in bare-metal environments, there is no receiver for an API call to create a load balancer. Since there is no cloud provider API available to Kubernetes, it cannot request external IP addresses or provision the resources one has come to expect in cloud environments such as AWS. This is a huge bummer, especially since dynamically built environments are fun to have.

To route traffic to web services inside of Kubernetes, you have two options available: an Ingress or a Service. Services can be exposed via NodePort, LoadBalancer, or ClusterIP. On bare metal, LoadBalancer will never work (unless you code your own API call to configure a load balancer outside of Kubernetes). ClusterIP might work if you want to manage a routing table somewhere inside your network, and NodePort will work if you want to manage a port-forwarding table on your router. None of these options are fun for home labs on bare metal. An Ingress is like a layer 7 firewall in that it reads the hostname and path of the incoming HTTP request and routes to the applicable Service. This works great for a dynamic environment where I am going to host multiple HTTP endpoints.

The overall view of this traffic is going to be: Internet > Router > k8s Ingress > k8s Service > Pod(s).

To expose the Ingress controller in Kubernetes, you give it a Service. In cloud environments, that Service is created as type LoadBalancer; in home labs, we create it as type NodePort and port forward on the router to any node in the Kubernetes cluster.

$ kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.102.173.184   <none>        80/TCP                       3d
ingress-nginx          NodePort    10.110.162.247   <none>        80:30746/TCP,443:32641/TCP   3d

In my home lab, I am port forwarding on my router 0.0.0.0:80 -> <any_node>:30746 and 0.0.0.0:443 -> <any_node>:32641.

Since I have a non-traditional home router (a Linux server with two NICs), I had to either enter these forwards into iptables or improve upon that by setting up a load balancer such as nginx. nginx allows me to load balance the forwarded ports across all my nodes and gives me an easy config file to edit. Because I also want to use cert-manager with free Let's Encrypt SSL certificates, I chose to use nginx's TCP stream server.
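A minimal sketch of that stream configuration, assuming three worker nodes and the NodePorts from the output above (the node IPs are placeholders for your own cluster):

# /etc/nginx/nginx.conf (stream section only)
stream {
    upstream ingress_http {
        server 192.168.0.11:30746;
        server 192.168.0.12:30746;
        server 192.168.0.13:30746;
    }
    upstream ingress_https {
        server 192.168.0.11:32641;
        server 192.168.0.12:32641;
        server 192.168.0.13:32641;
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }
    server {
        listen 443;
        proxy_pass ingress_https;   # TLS passes through untouched, so cert-manager can terminate it at the Ingress
    }
}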

Another hiccup (so to speak) in home-based labs is that ISPs hand out DHCP addresses. When my internet IP changes, I would need to update the DNS of all my HTTP endpoints. Rather than doing that, I have all my home URLs (*.wplr.rocks) CNAME to a single hostname, which a script updates with the correct IP.
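In zone-file terms, the trick looks roughly like this (the wildcard record and IP are illustrative; home.andrewwippler.com is the record my DHCP IP updater script above rewrites):

; every lab hostname resolves through one record...
*.wplr.rocks.            300 IN CNAME home.andrewwippler.com.
; ...and only this A record changes when the ISP hands out a new IP
home.andrewwippler.com.  600 IN A     203.0.113.10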

Kubernetes health check

The day before Thanksgiving, I was pondering an issue I was having. I had pinned a package to a specific version in my Docker container, and the repository I grabbed it from stopped offering that specific version. This resulted in a container that Jenkins reported as built correctly but that was missing an integral package my application needed to function properly. This led me to believe I had to implement Puppet's Lumogon in my Jenkins build process. Curious whether anyone had something like this already developed, I headed over to github.com, which eventually led me to compose this tweet:

The readinessProbe communicates with kube-proxy to either put a pod in service or take it out of service. At first I thought the readinessProbe was a once-and-done check, but I found out later that this is not the case. When a pod is launched, Kubernetes waits until the container is in the ready state, and we define what counts as a ready container by the use of probes. Coupled with a Kubernetes deployment strategy, we can also define and ensure that our application survives broken container updates.

Since the application I am supporting is already HTTP based, making an HTTP check to an endpoint that reports on connectivity to core services was the most trivial thing to implement. I created a script to verify connectivity to MariaDB, MongoDB, Memcached, and the message queue, and to verify that certain paths on the NFS share are present. All of these items are important to my application, and most of them require certain configuration values in my containers to operate. Having Kubernetes run this script every time there is a new pod verifies I will never experience an outage due to a missing package again. As I mentioned before, I thought the readinessProbe was once-and-done; however, after implementing it, my metrics indicated the script was running every 10 seconds per replica... this quickly added up!
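Wired into the pod spec, the probe looks something like this (a sketch: the container name, image, and endpoint path are stand-ins, though periodSeconds: 10 is the Kubernetes default and explains the every-10-seconds-per-replica behavior):

containers:
  - name: app
    image: registry.example.com/my-app:latest   # placeholder
    readinessProbe:
      httpGet:
        path: /healthz   # hypothetical endpoint that runs the connectivity script
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10  # the default interval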

After some chatting in the #kubernetes-users Slack, I gained a better understanding of the readinessProbe and how it is designed to communicate with kube-proxy so that you can "shut off" a container by taking it out of rotation. This was not the behavior I wanted, so it was suggested that I create a state file. The state file is created after the checks pass, and if it is present, all checks are skipped. Due to the ephemeral nature of container storage, it can be assumed this file will never exist on a pod where the check has not been performed.
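The guard itself is tiny; a sketch of the idea in shell, with hypothetical paths:

#!/bin/sh
# readiness-check.sh: run the expensive checks only once per pod
STATE=/tmp/readiness-passed

# container storage is ephemeral, so this file cannot survive into a new pod
[ -f "$STATE" ] && exit 0

# real connectivity checks (MariaDB, MongoDB, Memcached, message queue, NFS paths)
php /var/www/html/health-check.php || exit 1

touch "$STATE"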

Jumping the ship on Evernote

I am a long-time user of Evernote. Currently it has the best browser extensions, a wide range of supported operating systems, and a free tier; however, I am getting frustrated with it. In the past year, they have changed plans twice, and now the free tier is only supported on two platforms. This has caused me to re-evaluate my use of Evernote. Lately, all I have been using Evernote for is syncing a grocery list between devices and keeping my children's memories (their sayings, artwork, etc.) in one location. In the past I also used it for note taking, article saving, and capturing ideas. I have also seriously considered buying a subscription just so I can continue uninterrupted.

While this may be a rant from a free user about a free service, I do contribute to the monetization of their service by viewing advertisements. The free-tier limits (except for the maximum number of devices) are adequate for my occasional use and have probably cost Evernote around $3 total over the past several years. The value Evernote has placed on its second-level tier ($35/year) is much higher than the value I place on it (~$12/year). While I may not be able to set the price of Evernote, I can put a price on what I am willing to pay for a simple note service.

A recent article on opensource.com opened my eyes to note-taking alternatives. I was surprised at how mature Paperwork was; however, it has one simple flaw that throws my grocery-list use case out the window: no checkbox option. That led me to evaluate Google Keep, which does have checkboxes but functions more like sticky notes. Then I remembered that Atlassian's Confluence has checkboxes. Their paid version is $10 for up to ten users (per year if self-hosted, per month if in the cloud). This fits my budget: I can create grocery lists, take notes, and create notebooks/spaces. While I have not switched away yet, Confluence seems like a viable option since I already have an always-on home server.

Debugging PHP web applications

In 2017, this topic seems a little dated and will probably not get me an opportunity to speak at a conference. While all of the elite programmers, cool kids, and CS grads are talking about languages such as Go and Erlang (how to do tracing, performance testing, and the like), it seems very juvenile for me to write about PHP.

PHP is a language made specifically for the web. It was the first web language I learned after HTML 4/CSS. I learned it because it was easy. The syntax was easy, the variables were easy, running it was easy; however, when something broke, it was difficult as a beginner to troubleshoot. I would spend several hours looking at the code I had just written only to find out I had missed a semicolon or a quote. This post is several things I wish I had known when I started with PHP.

OpenStack certification

On Dec 20th, I am scheduled to take my COA exam. From the exam requirements page, it appears to be a moderately difficult exam. The few points I need to work on are Heat templates and Swift object administration. The few things I know about the exam itself come from publicly available YouTube videos of OpenStack Summit sessions.

One of my troubles when studying for exams is creating content to test myself on the exam objectives. I look at the requirements and say to myself, "I know that," and nothing gets written for that objective. One thing I have done in the past is search GitHub for exam prep questions. One repo I have found for OpenStack is AJNOURI/COA. He also made a nifty website for his test prep questions.

A key aspect that has helped me pass all of my open-book exams is recalling the locations of my trouble areas. Looking at the docs and reading the manual has become a best practice of mine. Most of the time, exam questions are covered in the docs, as the exams expect you to have read them.

Signs you are doing IT wrong

  1. You still use FTP
  2. You use SFTP
  3. You have a single server hosting 1 website, MySQL, and PHP. It has 4+ GB of RAM and you only have ~2,000 visitors a day.
  4. You login via root
  5. You don’t use version control
  6. You use a control panel for servers to which you have SSH access
  7. It takes you over an hour to migrate 1 website
  8. Your DNS TTL records are over 10 minutes
  9. Your SQL server is not accessible over SSL/TLS
  10. You use mod_php instead of reverse proxying to php-fpm
  11. You develop for the web on Windows
  12. You chmod 777
  13. You use modules/plugins that require chmod 777
  14. You have no backups
  15. You host multiple websites on one server (internal-only websites excluded)
  16. You SSH with passwords
  17. You reuse passwords
  18. You don’t read books
  19. You don’t attend conferences
  20. You attend more than 6 conferences a year
  21. You use skype for communication
  22. You make a separate mobile site
  23. You add more RAM to fix your memory leaks

Cloud computing cost analysis

Having a server in the cloud scared me at first. It wasn't the fact that being in a multi-tenant environment posed the possibility of others gaining access to my code/files; it was the cost that scared me. Not knowing whether I was getting the best deal always plagued my mind. Since electricity, A/C, and hardware maintenance were never factored into my budgets, it was hard to justify a server in the cloud when on-premise appeared to be so cheap.
