Why Jesus and Easter Matter

There is a God and He loves you. John 3:16 says, “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”

Everyone is a sinner, and our sin separates us from the love that God wants to express toward us. “For all have sinned, and come short of the glory of God;” (Romans 3:23) “But your iniquities have separated between you and your God, and your sins have hid his face from you…” (Isaiah 59:2).

Jesus came to break down the barrier of sin. “But God commendeth his love toward us, in that, while we were yet sinners, Christ died for us” (Romans 5:8).

The death of Jesus satisfied all the requirements, but Jesus did not stay dead. Three days later, on Easter morning, He arose from the grave by His own power. “He is not here: for he is risen, as he said.” (Matthew 28:6) “No man taketh it from me, but I lay it down of myself. I have power to lay it down, and I have power to take it again.” (John 10:18)

Having a relationship with God is not about being good or religious. It is about claiming the work Jesus fulfilled on the cross.

Jesus said in John 6:47, “He that believeth on me hath everlasting life.”

Those who trust in Jesus are saved from the penalty and power of sin and will have eternal life in Heaven with God.

“For whosoever shall call upon the name of the Lord shall be saved” (Romans 10:13).

Place your trust in Jesus today.

Not posting as much

I have not been posting as much tech content on my blog as I want to. The reason is that I have been mulling over the idea of submitting my tutorials to a publication and getting paid for my work. I still have not decided if this is the right course of action. Not that I have to explain myself to the readers of my blog – I just wanted something other than a Christmas post as my newest blog post 😮

Merry Christmas

Merry Christmas from the Wippler family!

We pray that you are enjoying this season of celebration as we reflect upon our Saviour’s birth. Truly, for the Christian, Christ is the focal point of the season.

October was unusually mild and warm for Minnesota. Then on Friday, October 27, a snowstorm swept through our area, dumping the highest one-day snowfall total recorded in Duluth since October 1933! Quite a rude awakening for this California family. However, we are loving the snow and are adjusting to the colder climate (although we’re told the worst is yet to come). Like today, when the high is going to be -10°F!

On the family side of things, it’s been a joy to watch the Lord work in Mollie’s life this past year–first with her salvation in June and then with her decision to be baptized 3 weeks ago. She has a tender heart, and we’re praying the Lord uses her greatly. On a humorous note, she decided yesterday she wanted to style her hair like “mommy’s.” While Nicole prepared lunch downstairs, Mollie was upstairs adjusting her hairstyle.

Fortunately, not too much damage was done, and the hairdresser informed us that having shorter layers in the front and longer in the back is the “in” look.

Meg turned 4 on November 7, while Clark celebrated his 3rd birthday on December 15! How time flies! Meg continues to astound us with her love of learning and her ability to learn. She’s a thinker for sure: the other day during family devotions, Nicole asked her why Mary washed Jesus’ feet with her hair, hoping she’d say something along the lines of “because she loved Jesus.” Instead she responded with, “because she didn’t have any towels.”

Now that Jake’s gotten a little older, Clark and Jake have become car/train playing buddies. It’s hilarious to watch them playing and dragging their blankets together around the house. Both boys are becoming more articulate. After Clark fell on his face one day, Nicole teasingly said, “Oh no! Your face has broken into 100 pieces. Let’s get some glue.” With a panic-stricken tone, Clark told Mollie, “Get the glue, Mollie! Hurry! My face is broken!” We’re truly blessed with 4 precious children!

Speaking of blessings, it’s been a tremendous blessing to see the Lord working in our church. At our annual Christmas dinner, we had many visitors, with several making professions of salvation. The following Sunday morning we had our children’s Christmas program. The auditorium was filled, and several of our bus children had family in attendance. One little girl invited her school mentor and saw her in attendance. We’re praying that many of these visitors will be saved, baptized, and then discipled in the upcoming months.

In closing, let us wish your family a Happy New Year! We truly appreciate your prayers as we seek to be faithful to the Lord and His calling upon our lives. You are in our thoughts and prayers as well.

Love,

Andrew and Nicole

Kubernetes health check

The day before Thanksgiving, I was pondering an issue I was having. I had pinned a package to a specific version in my Docker container, and the repository I grabbed it from stopped offering that version. This resulted in a container that Jenkins reported as built correctly but that was missing an integral package my application needed to function properly. This led me to believe I had to implement Puppet’s Lumogon in my Jenkins build process. Curious whether anyone had already developed something like this, I headed over to github.com, which eventually led me to compose this tweet:

The readinessProbe result is used by kube-proxy to take a pod into or out of service. At first I thought the readinessProbe was a once-and-done check, but I found out later that this is not the case. When a pod is launched, Kubernetes waits until the container is in the ready state, and we define what constitutes a ready container through probes. Coupled with a deployment strategy, we can also ensure our application survives broken container updates.
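
As a rough sketch, here is what that looks like in a Deployment manifest; the path, port, and timings are illustrative placeholders rather than my exact values:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep old pods serving until new ones pass the probe
  template:
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest
        readinessProbe:
          httpGet:
            path: /healthz    # endpoint that reports on core-service connectivity
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10   # the kubelet repeats the probe on this interval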

Since the application I am supporting is already HTTP based, an HTTP check against an endpoint that reports on connectivity to core services was the most trivial thing to implement. I created a script that verifies connectivity to MariaDB, MongoDB, Memcached, and the message queue, and checks that certain paths on the NFS share are present. All of these items are important to my application, and most of them require certain configuration values in my containers to work. Having Kubernetes run this script every time there is a new pod ensures I will never again experience an outage due to a missing package. As I mentioned before, I thought the readinessProbe was once-and-done; however, after implementing it, my metrics indicated the script was running every 10 seconds on every replica… this quickly added up!

After some chatting in the #kubernetes-users Slack, I got a better understanding of the readinessProbe and how it was designed to feed kube-proxy so that you can “shut off” a container by taking it out of rotation. This was not the behavior I wanted, so it was suggested that I create a state file. The state file is written after the checks succeed and, if it is present, all subsequent checks are skipped. Due to the ephemeral nature of container storage, it is safe to assume this file will never exist on a pod where the check has not yet passed.
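
A minimal sketch of that state-file pattern (the dependency hostnames, ports, and file path are placeholder assumptions, not my actual values):

#!/bin/bash
# Readiness script sketch: run the expensive checks only until they pass once.
STATE_FILE=/tmp/readiness-ok

# Container storage is ephemeral, so this file only exists if a previous run succeeded.
[ -f "$STATE_FILE" ] && exit 0

# Verify connectivity to each core dependency; fail the probe if any is unreachable.
nc -z mariadb.internal 3306    || exit 1
nc -z mongodb.internal 27017   || exit 1
nc -z memcached.internal 11211 || exit 1
nc -z rabbitmq.internal 5672   || exit 1
[ -d /mnt/nfs/uploads ]        || exit 1

# Everything checked out; record it so later probes return immediately.
touch "$STATE_FILE"
exit 0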

Adding a user to k8s RBAC

To add a user to a Kubernetes cluster, we will need several things: kubectl, the cluster’s CA.crt and CA.key (found in the master node’s /etc/kubernetes/pki folder), and openssl.

First, create a private key for the new user. In this example, we will name the file employee.key:

openssl genrsa -out employee.key 2048

Next, we will need to create a certificate signing request – employee.csr – using the private key we just created (employee.key in this example). Make sure to specify your username and group in the -subj section (CN is the username and O is the group).

openssl req -new -key employee.key -out employee.csr -subj "/CN=username/O=developer"

Generate the final certificate employee.crt by signing the certificate signing request, employee.csr, with the cluster’s CA. In this example, the certificate will be valid for 90 days.

openssl x509 -req -in employee.csr -CA CA.crt -CAkey CA.key -CAcreateserial -out employee.crt -days 90

Give employee.crt, employee.key, and CA.crt to the new employee and have them follow the steps below.

# Set up the cluster
$ kubectl config set-cluster k8s.domain.tld --server https://api.k8s.domain.tld --certificate-authority /path/to/CA.crt --embed-certs=true

# Set up the credentials (a.k.a login information)
$ kubectl config set-credentials <name> --client-certificate=/path/to/cert.crt --client-key=/path/to/cert.key --embed-certs=true

# bind login to server
$ kubectl config set-context k8s.domain.tld --cluster=k8s.domain.tld --user=<name>
# Optional: append `--namespace=<namespace>` to the command to set a default namespace.

Note: You may move the certificates to a safe location since the commands included --embed-certs=true, which stores the certs in base64 form inside the kubeconfig.
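
The certificate only handles authentication; for the new user to actually do anything, the cluster also needs an RBAC binding for the group named in the CSR (the O=developer part). Below is a minimal sketch assuming you want to grant the built-in edit role in a namespace called development – both names are placeholders:

# rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer-edit
  namespace: development
subjects:
- kind: Group
  name: developer            # must match the O= value in the CSR subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in ClusterRole with read/write access in a namespace
  apiGroup: rbac.authorization.k8s.io

Apply it with kubectl apply -f rolebinding.yaml, and have the employee run kubectl config use-context k8s.domain.tld to start using the new context.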

Sometimes I post to my blog so I remember how to do a particular thing. This is one of those times.

Reusable containers with confd

I recently had the need to populate a file in a Docker container based upon whether the container is in production or development. I eventually came across confd, which lets me populate data in files based upon particular environment variables. While confd excels with distributed key-value stores, my needs (and infrastructure) are at a much simpler level.

Confd requires a couple of directories: one for toml configuration files (/etc/confd/conf.d/) and one for template files (/etc/confd/templates/). When confd runs, it looks at the contents of each toml file in the conf.d directory and processes them according to their instructions.

In my example repository, I want a container to say hello to me when it senses a NAME environment variable and to print out the current datetime. If no environment variable is set, only the datetime is printed. To do this, I create a toml file that looks like this:

[template]
src = "echo.tmpl"
dest = "/echo"

This file instructs confd to generate the echo file, place it in the root (/), and use /etc/confd/templates/echo.tmpl as the source of its contents.
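
The template itself is not shown above, but a sketch of an echo.tmpl that matches the behavior described – a small shell script rendered with confd’s getenv template function – could look like this:

#!/bin/sh
{{if getenv "NAME"}}echo "Hello {{getenv "NAME"}}"{{end}}
echo "The current time is: $(date)"

The {{...}} parts are expanded by confd at render time, while $(date) is left for the shell to expand each time the generated /echo script runs.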

When building the container, we must include these configuration files and ensure confd is run to generate the destination file. My example Dockerfile does just that by copying all of the files into the container and setting the entrypoint to the docker-entrypoint script, which basically runs confd and then the newly generated file.
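
The entrypoint is only a couple of lines; a sketch of it (my repository’s exact flags may differ) looks like this:

#!/bin/bash
# Render the templates a single time, reading values from environment variables.
/usr/local/bin/confd -onetime -backend env
# Run the file confd just generated.
sh /echo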

 andrew@wipplerxps > ~/git_repos/confd $  docker build -t blog-confd .
Sending build context to Docker daemon 57.34 kB
Step 1/9 : FROM centos:7.4.1708
 ---> 5076a7d1a386
Step 2/9 : LABEL maintainer "andrew.wippler@gmail.com"
 ---> Using cache
 ---> d712b31f7449
Step 3/9 : RUN mkdir -p /etc/confd/{conf.d,templates}
 ---> Running in f340bdcdf973
 ---> 1f0faa9b962f
Removing intermediate container f340bdcdf973
Step 4/9 : COPY docker/confd/ /etc/confd/
 ---> fb16dffc63ac
Removing intermediate container 133128cb7fc1
Step 5/9 : ADD https://github.com/kelseyhightower/confd/releases/download/v0.14.0/confd-0.14.0-linux-amd64 /usr/local/bin/confd
Downloading 17.61 MB/17.61 MB
 ---> a62b388274e6
Removing intermediate container 3f9ec343a5ab
Step 6/9 : RUN chmod +x /usr/local/bin/confd
 ---> Running in 1489dd02ea45
 ---> ab99a5fc5f95
Removing intermediate container 1489dd02ea45
Step 7/9 : COPY docker/docker-entrypoint.sh /var/local/
 ---> 16906971c8ef
Removing intermediate container 7a17a8e17e22
Step 8/9 : RUN chmod a+x /var/local/docker-entrypoint.sh
 ---> Running in 1562a6d06432
 ---> f963372159b1
Removing intermediate container 1562a6d06432
Step 9/9 : ENTRYPOINT /var/local/docker-entrypoint.sh
 ---> Running in 1b7e12c38b4c
 ---> f7d260597e0a
Removing intermediate container 1b7e12c38b4c
Successfully built f7d260597e0a
 andrew@wipplerxps > ~/git_repos/confd $  docker run blog-confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo has been updated
The current time is: Tue Nov 28 20:05:24 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $  docker run -e NAME="Andrew Wippler" blog-confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo has been updated
Hello Andrew Wippler
The current time is: Tue Nov 28 20:05:34 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $

While it is fun to say hello to yourself once in a while, I am using confd to modify an nginx.conf. When I pass in the SSL environment variable, nginx will listen on port 443 with a self-signed cert and redirect all HTTP traffic to HTTPS. Obviously, in production I want to use a real SSL cert. Using confd allows me to have the same Docker container in development and production – the only difference being a configuration change.
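
As a rough illustration (not my exact production template), the relevant part of the nginx.conf template keys off that SSL variable something like this – the certificate paths and upstream are placeholders:

server {
    listen 80;
{{if getenv "SSL"}}
    # Self-signed development TLS; production swaps in a real certificate.
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;

    # Send plain-HTTP requests over to HTTPS.
    if ($scheme = http) {
        return 301 https://$host$request_uri;
    }
{{end}}

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}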

Old glory

I was going through the articles I have collected over the years and found this little gem. The author is unknown.


I AM THE FLAG OF THE
UNITED STATES OF AMERICA

I am the flag of the United States of America.
My name is Old Glory.
I fly atop the world’s tallest buildings.
I stand watch in America’s halls of justice.
I fly majestically over institutions of learning.
I stand guard with power in the world.
Look up and see me.


Settling in

It has been a month and a half since we moved, and we are finally on a set schedule. The girls have started school, all boxes are unpacked, our house in California closed escrow, and we have successfully adjusted to the CST time zone.

It always amazes me how much God has blessed my family. We are all in good health, we are debt free, we have a roof over our head, and we have food to eat. I can attribute this all to God as I have no control over my health, I am horrible when it comes to big financial matters (such as retirement savings, stock trading, etc.), and I am not the best at selecting the best food choices for myself. It is only through God’s providential guidance that I am in the state I am.

I have been able to go fishing twice since arriving and, as a result, caught 2 fish. Neither was big enough to keep, but it was a great experience to enjoy. It seems the best fishing out here is on a lake, which means one needs a boat to get to the middle of it. This has caused me to research fishing kayaks. It is doubtful I will get one for the 2017 season, but if I save enough money, I can definitely get one for the 2018 season 🙂

Being a bi-vocational Assistant Pastor is rather weird. It is a role I have not experienced for very long, but I am very excited for what God has in store. Since being here in Duluth, I have been able to share the gospel with several people and see one make a profession of faith in Christ. It was neat to see the “lightbulb” appear when he understood that Jesus came to do all the work of salvation for us and that salvation is not dependent upon our good works or how we live. It is very sad that many people are so trapped in the religion of Catholicism, Lutheranism, or Atheism that they fail to see who Jesus is, why He came, and how we can experience life through a relationship with Him.

I am very thankful I can have a second job that is remote and keeps me in the tech industry. I like working with bleeding-edge software such as Docker, Kubernetes, Puppet, and the like. It also allows me to scratch the itch I have with the nerdy side of tech – coding. I have been able to maintain a legacy PHP app while developing some in NodeJS and Go. Mostly I have been sharing cool things about GitHub, build pipelines, and the philosophy of “Let the robots do it.”

My coffee survival habit has been amplified by the purchase of a Ninja coffee maker last Amazon Prime Day. I was not able to use it until we moved here, and I am quite satisfied with how it makes iced coffee. It also came with a recipe which I have been following for my daily iced coffee – it calls for half-and-half with a few ounces of flavored syrup. I think this is the best approach to iced coffee.

Moving Adventures Part 2

(Note: This guest post is from my wife, Nicole. These are the events that happened July 25th, 2017.)

After 6 hours of rest, we awakened rested (somewhat) and ready to resume our journey. Morning was fairly uneventful, and we were back on the road by 9. As we were fueling up, we realized that Clark’s cup (which he went to bed with) was accidentally left somewhere amid the sheets despite going through the room 2-3 times. No worries! Remembered that we had brought 2 extra sippy cups.

As we merged onto I-76, we spotted what we believe to be our moving van. Too funny! The day before we’d seen the same truck as we left Barstow. At least we know our belongings should arrive by Friday, lol! An hour into our drive, all the kids except Mollie fell asleep.

Thankfully, the next leg of our journey was fairly uneventful, and we were able to stop for lunch in Kearney by 2:45. Other than an icky diaper and ordering the wrong sandwich for Andrew, lunch was quite pleasant. In fact, Jake ate more than Clark or Meg! Thanks to everyone’s prayers, the children were very well behaved as we made our way to the next stop–Des Moines.

For the next 3.5 hours: played with toys, watched cartoons, and slept (praise the Lord!!). Seems whenever we’re close to our destination one of the younger three decides they’ve had it and begins to fuss. Thankfully, Clark perks up with food or cars, and Jake loves snacks and playing with random items (wipes bucket, water bottle, thermos, favorite afghan, etc.). As a reward, hoping to get a little pool time at the hotel. So thankful we’re only driving 5 hours tomorrow and going to get a good 2-hour break at the Mall of America!

Moving Adventures Part 1

(Note: This guest post is from my wife, Nicole. These are the events that happened July 24th, 2017.)

Well, after 4 hours of sleep, we woke and were on the road by 5. Forgot one little detail… taking the girls to the restroom before we left. When we reached Adelanto, Mollie said she had to go to the bathroom. Pulled in to the first gas station we saw and wouldn’t you know it, the restrooms were both out of order. Bought a potty seat for emergencies such as this.

In the meantime, Meg decided she wanted to get dressed. Dressed the girls and was handing out the snack when Jake gagged himself and threw up the banana I had just fed him! Cleaned him up, scrounged around to find the clothes I’d stuffed in our overloaded van. A 10 minute stop took 30 minutes.

Back on the road! Fortunately, despite a quick run through McDonald’s, we reached St. George, Utah by noon. As we unloaded the car, we spent 10 minutes in 107° weather looking for Mollie’s shoe and 3 elusive crayons. Finally decided to look again when we returned from lunch and to have Mollie ride in the shopping cart with one shoe.

Took Jake out of the car only to discover he’d blown out his diaper! He didn’t want to be changed because his poor rear is so irritated. Think the ladies in the Costco restroom thought I was crazy as I held down a squirmy, stinky baby. Once he was changed, I took the girls to the restroom and heard someone commenting on the smell in the bathroom! Way to go baby boy!!

Returned to the van and spent another 10 minutes finding the lost flip flop and crayons. Between the puke and diaper blowout, we’d nearly used our stash of wipes. So had to run to Walmart of course! Back to the road!

Thankfully the kids played quietly with their aqua doodle pads, and the girls fell asleep. Just as Jake was starting to fuss, we passed through a rainstorm, and the boys stared out the window awestruck. Rain is discovered (by the way, the storm cleaned our windshield better than the Lancaster Cruz Thru)!

Things went fairly well for the next few hours. Kids napped and ate dinner at Wendy’s. About 20 minutes into the home stretch, I heard a noise any mother learns to dread–Clark was about to spew. Managed to get most of his dinner in the plastic bag I’d brought for such emergencies. After a quick clean up job, we were back in business. Five minutes later though, we had to stop again because we suspected Jake had dirtied his diaper. False alarm!

Why is it the last 100 miles of any trip seem the longest? After hitting a few construction spots, we finally pulled into Denver at midnight. Sadly, kids who are awakened at midnight are not exactly pleasant. Mollie did okay, but Meg is a bear whenever her sleep’s interrupted. Clark perked up a bit as we trekked across the parking lot. Once we reached the unfamiliar hotel room, however, he started crying that he wanted out of the room! Andrew left me with the three younger ones while he and Mollie retrieved our luggage. In the meantime, Jake’s full diaper leaked out onto my shirt. Andrew hadn’t returned, so I quickly stripped Jake down and threw him into the tub. At which point Jake joined the crying chorus. I half expected the manager to come throw us out any minute with all the ruckus we were causing.

Andrew finally came with the luggage (except for the toiletries bag but that’s another story). Prepared for bed and let the kids watch a little episode of Daniel Tiger’s Neighborhood before settling in for a few hours of much needed rest!

Leaving California

Today marks the first day in over 29 years that I am no longer a resident of the great state of California. No longer will iscaliforniaonfire.com be relevant to me or my family! As with every major life decision, there are good things and bad things.

From the move, I will miss the following:

  • In-n-Out burgers
  • Nearby family members
  • Lancaster Baptist Church

However, I will not miss the following:

  • 100 degree weather
  • California driving
  • Smog
  • Time Warner Cable

I am also super pumped about these:

  • More fishing opportunities
  • Snow
  • A new ISP
  • Dunkin’ Donuts

Autosign Puppet certificates on AWS

Let’s face it: Puppet’s certificate signing process is a pain and a huge administrative burden if done manually. Thankfully, Puppet provides several methods of auto-signing certificates. One of them is crafting a special certificate signing request and having the master verify that the CSR is genuine.

On the puppet master

Apply the following code on your puppet master. This will set up the autosign script which will verify your custom certificate signing request. If the CSR is genuine, the puppet master will sign the certificate.

  service { 'puppetserver':
    ensure => running,
    enable => true,
  }

# The file must have execute permissions
# The master will trigger this as `/etc/puppetlabs/puppet/autosign.sh FQDN`
  file { '/etc/puppetlabs/puppet/autosign.sh':
    ensure  => file,
    mode    => '0750',
    owner   => 'puppet',
    group   => 'puppet',
    content => '#!/bin/bash
HOST=$1
openssl req -noout -text -in "/etc/puppetlabs/puppet/ssl/ca/requests/$HOST.pem" | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa',
  }

# This sets up the required ini setting and restarts the puppet master service
  ini_setting {'autosign nodes':
    ensure  => present,
    path    => '/etc/puppetlabs/puppet/puppet.conf',
    section => 'master',
    setting => 'autosign',
    value   => '/etc/puppetlabs/puppet/autosign.sh',
    notify  => Service['puppetserver'],
    require => File['/etc/puppetlabs/puppet/autosign.sh']
  }

On the agents

With our puppet master ready to go, we need to set up our agents to generate the custom certificate request. This can be done by editing /etc/puppetlabs/puppet/csr_attributes.yaml before running puppet with the following content:

custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)

Note: The value under 1.2.840.113549.1.9.7 must match the string you grep for in the autosign script. This OID is the PKCS#9 challengePassword attribute, which is reserved in the CSR for pre-shared-key purposes such as this. Also note that the $(curl …) values are shell command substitutions; they are only filled in when the file is generated by a shell script (as in the Execution section below), not when the YAML is edited by hand.

Execution

With everything in place, the way to execute this successfully is to pass the script below as the user data when creating an EC2 instance:

#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML

An alternative method is to create a custom AMI (especially for auto-scaling groups). I use the below puppet code to create my golden AMI.

  cron { 'run aws_cert at reboot':
    command => '/aws_cert.sh',
    user    => 'root',
    special => 'reboot',
    require => File['/aws_cert.sh'],
  }

  file { '/aws_cert.sh':
    ensure  => file,
    mode    => '0755',
    content => '#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
  1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
  pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  pp_image_name: $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML

export CERTNAME="aws-node_name-`date +%s`"

/opt/puppetlabs/bin/puppet apply -e "ini_setting { \"certname\": \
  ensure  => present, \
  path    => \"/etc/puppetlabs/puppet/puppet.conf\", \
  section => \"main\", \
  setting => \"certname\", \
  value   => \"$CERTNAME\", \
}"

/opt/puppetlabs/bin/puppet agent -t -w 5',
  }

My tablet history and Kindle Fire (7th Gen) review

My first tablet was an Acer A500 which ran Honeycomb (Android 3.0). I used that tablet for everything – reading, pictures, studying, and projecting games in the children’s class I taught at the time. It was used more than my laptop, phone, and desktop combined. It served its purpose until my wife accidentally knocked it off the kitchen counter (it was lying flat) and it got a huge dent. It never was the same after that incident. I eventually got rid of it after it no longer held a charge due to the damage it received.

The next tablet I purchased was a Kindle fire 2nd gen, which I gave to my wife and borrowed when needed. Life was great. Then, as a reward for good behavior, we let our children play the games and apps we purchased on the Kindle. Our children are so well behaved (thanks to some awesome parenting tips we received in our Adult Bible Class at church) that my wife no longer had enough Kindle time to do her reading. This is when I bought her a 5th gen Kindle fire. The difference between versions impressed me. I liked the thinner design, and the new UI was very intriguing. I determined that if I ever wanted a new tablet, I would get myself a Kindle fire. (I was not in the market for one as I was a happy 6-inch phablet user.)

Recently, I have had a desire to reduce my reading list by actually reading the books. At first I tried reading from my phablet, but alas, a 6-inch screen is not ideal for reading large amounts of text – even if you have a high-DPI phone such as my Google Nexus 6. This led me to purchase a Kindle fire 7th gen, which was released on June 7th, 2017.

It has the same intriguing design as the 5th gen device, but with better battery life and more external storage capacity. When I first unboxed and turned on the device, I was happy at how little I had to do to get my new Kindle operational. After 10 minutes of using Fire OS (which is just Android without Google), I quickly realized just how attached I am to Google services. Most of my daily-use apps – such as Dropbox and JuiceSSH – were not available. Did I make a mistake buying this $70 device (+$30 case and $9 tax)? Thankfully, there is an alternative, it works quite well, and it doesn’t even require rooting your device!

Wow, after installing the four apps, running a Google Play services update, and downloading my needed apps, I am really enjoying my Kindle fire 7th gen! Everything is working as expected, albeit I have to wait a few milliseconds longer than on my phablet for common tasks. The screen size is perfect, the weight and style are also perfect, and I was able to be different and get a yellow one. (My wife has blue, black is the color I usually get, and red seemed too plastic.)

Moving to Desktop GNU/Linux from Windows/Mac

There are many curious individuals who tinker with GNU/Linux as a Server OS and want to experience what it is like as a Desktop OS. The switch is often hindered by two obstacles:

  1. Some daily-use programs are not available (e.g. Photoshop, iTunes, etc.).
  2. Not knowing what to do if something goes wrong, or how to get a 3D graphics driver installed and working.

While these are valid reasons and definite show-stoppers for some, others can safely migrate to GNU/Linux.

The obstacle of programs

I like Krita as an alternative to Photoshop. The menu options are nearly the same, and I do not have to install a silly theme (like I have to do in GIMP) or re-learn photo editing just to find where everything is. I have successfully installed Photoshop CS4 under Wine without any issues, but Krita has more features than CS4. Darktable is also a good alternative to Photoshop’s Camera Raw/Bridge.

Rhythmbox connects to iPhones/iPods the same way iTunes does, but without the store. iTunes also runs quite well on a recent version of Wine. Some might also want to check out Clementine.

Almost every program has an alternative. Alternatives can be found via alternativeto.net or the Software Recommendations site on Stack Exchange.

The unknown obstacles

To use GNU/Linux successfully as the primary Desktop OS, in my opinion, one must have a desktop with capable hardware. I consider myself an AMD guy: I like the price-to-performance ratio, and I rarely do CPU-intensive tasks on my desktop. When AMD bought ATI, I was also happy, as ATI was my favorite graphics card maker. Unfortunately, most Desktop GNU/Linux users are developers and need that extra performance; they have desktop workstations with Nvidia graphics cards and Intel CPUs. You will often find that Desktop GNU/Linux performs better, is easier to use, and has more tutorials when it comes to Nvidia graphics cards and how to get them working.

Captive Portal Overview

I originally authored this on Aug 16, 2016 at http://unix.stackexchange.com. Considering my tutorial did not include an overview, I thought I would re-post it on my blog.


To make a captive portal appear, you need to stop all internet traffic and provide a 302 redirect to the client’s browser. To do this, you need a firewall (like iptables) to redirect all traffic to a webserver (like nginx, apache, etc.) where the webserver responds with a 302 redirect to the URL of your login page.

I have written a lengthy article on how to do this with a Raspberry Pi. It basically boils down to iptables blocking/redirecting traffic to the webserver:

iptables -t nat -A wlan0_Unknown -p tcp --dport 80 -j DNAT --to-destination 192.168.24.1

and then the webserver (nginx) redirecting to the login page:

# For iOS
if ($http_user_agent ~* (CaptiveNetworkSupport) ) {
    return 302 http://hotspot.localnet/hotspot.html;
}

# For others
location / {
    return 302 http://hotspot.localnet/;
}

iOS has to be difficult in that it needs the WISPr settings. The contents of hotspot.html are as follows:

<!--
<?xml version="1.0" encoding="UTF-8"?>
<WISPAccessGatewayParam xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.wballiance.net/wispr_2_0.xsd">
<Redirect>
<MessageType>100</MessageType>
<ResponseCode>0</ResponseCode>
<VersionHigh>2.0</VersionHigh>
<VersionLow>1.0</VersionLow>
<AccessProcedure>1.0</AccessProcedure>
<AccessLocation>Andrew Wippler is awesome</AccessLocation>
<LocationName>MyOpenAP</LocationName>
<LoginURL>http://hotspot.localnet/</LoginURL>
</Redirect>
</WISPAccessGatewayParam>
-->
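
Once a client has authenticated on the login page, the portal has to stop redirecting that client. A common way to do this – only a sketch here, with a placeholder MAC address and the chain name from the rule above – is to insert an exemption ahead of the DNAT rule:

iptables -t nat -I wlan0_Unknown -m mac --mac-source 00:11:22:33:44:55 -j RETURN

RETURN makes the authenticated client skip the rest of the wlan0_Unknown chain, so its traffic is no longer rewritten to the portal webserver.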