Leaving California

Today marks the first day in over 29 years that I am not a resident of the great state of California. No longer will iscaliforniaonfire.com be relevant to me or my family! As with every major life decision, the move brings both good things and bad things.

From the move, I will miss the following:

  • In-n-Out burgers
  • Nearby family members
  • Lancaster Baptist Church

However, I will not miss the following:

  • 100 degree weather
  • California driving
  • Smog
  • Time Warner Cable

I am also super pumped about these:

  • More fishing opportunities
  • Snow
  • A new ISP
  • Dunkin' Donuts

Autosign Puppet certificates on AWS

Let’s face it, managing Puppet certificates manually is a pain and a huge administrative overhead. Thankfully, Puppet provides several methods of auto-signing certificates. One of them is to craft a special certificate signing request (CSR) and verify that the CSR is genuine.

On the puppet master

Apply the following code on your Puppet master. It sets up the autosign script that verifies your custom certificate signing request. If the CSR is genuine, the Puppet master will sign the certificate.

  service { 'puppetserver':
    ensure => running,
    enable => true,
  }

  # The file must have execute permissions
  # The master will trigger this as `/etc/puppetlabs/puppet/autosign.sh FQDN`
  file { '/etc/puppetlabs/puppet/autosign.sh':
    ensure  => file,
    mode    => '0750',
    owner   => 'puppet',
    group   => 'puppet',
    content => '#!/bin/bash
HOST=$1
openssl req -noout -text -in "/etc/puppetlabs/puppet/ssl/ca/requests/$HOST.pem" | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa',
  }

  # This sets up the required ini setting and restarts the puppet master service
  ini_setting {'autosign nodes':
    ensure  => present,
    path    => '/etc/puppetlabs/puppet/puppet.conf',
    section => 'master',
    setting => 'autosign',
    value   => '/etc/puppetlabs/puppet/autosign.sh',
    notify  => Service['puppetserver'],
    require => File['/etc/puppetlabs/puppet/autosign.sh']
  }

On the agents

With our Puppet master ready to go, we need to set up our agents to generate the custom certificate request. This can be done by writing the following content to /etc/puppetlabs/puppet/csr_attributes.yaml before the first puppet agent run (the $(...) commands are expanded when the file is generated from a shell script, as in the Execution section):

custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)

Note: The 1.2.840.113549.1.9.7 value must match the string you grep for in the autosign script. This OID is the PKCS #9 challengePassword attribute, which is reserved in the CSR for purposes such as this.
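You can sanity-check this mechanism end to end without a Puppet master at all. The sketch below (throwaway paths in /tmp, the same example secret, and a hypothetical agent.example.com CN) builds a CSR carrying the challengePassword attribute and then runs the same grep the autosign script uses:

```shell
# Build a throwaway CSR that carries the shared secret as the
# challengePassword attribute (OID 1.2.840.113549.1.9.7).
cat > /tmp/csr.cnf <<'EOF'
[ req ]
distinguished_name = dn
attributes         = attrs
prompt             = no
[ dn ]
CN = agent.example.com
[ attrs ]
challengePassword = pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
EOF

openssl req -new -newkey rsa:2048 -nodes \
  -keyout /tmp/agent.key -out /tmp/agent.pem -config /tmp/csr.cnf 2>/dev/null

# This is the same check autosign.sh performs; exit status 0 means "sign it".
openssl req -noout -text -in /tmp/agent.pem \
  | grep -q pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa && echo "autosign: OK"
```

If the grep fails (wrong secret, or no challengePassword in the CSR), the script exits non-zero and the master refuses to sign.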

Execution

With everything in place, pass the script below as the user data script when creating an EC2 instance:

#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML
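For reference, if the script above is saved as userdata.sh, launching an instance with it via the AWS CLI looks roughly like this (the AMI ID, instance type, and key name are placeholders for your own values):

```shell
# Launch an EC2 instance with the CSR-bootstrapping script as user data.
# ami-12345678, t2.micro, and my-key are placeholders.
aws ec2 run-instances \
  --image-id ami-12345678 \
  --instance-type t2.micro \
  --key-name my-key \
  --user-data file://userdata.sh
```

The cloud-init process on the instance runs the script on first boot, writing csr_attributes.yaml before the Puppet agent ever starts.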

An alternative method is to bake this into a custom AMI (especially useful for auto-scaling groups). I use the Puppet code below to create my golden AMI.

  cron { 'run aws_cert at reboot':
    command => '/aws_cert.sh',
    user    => 'root',
    special => 'reboot',
    require => File['/aws_cert.sh'],
  }

  file { '/aws_cert.sh':
    ensure  => file,
    mode    => '0755',
    content => '#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
  1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
  pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  pp_image_name: $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML

CERTNAME="aws-node_name-$(date +%s)"

/opt/puppetlabs/bin/puppet apply -e "ini_setting { \"certname\": \
  ensure  => present, \
  path    => \"/etc/puppetlabs/puppet/puppet.conf\", \
  section => \"main\", \
  setting => \"certname\", \
  value   => \"$CERTNAME\", \
}"

/opt/puppetlabs/bin/puppet agent -t -w 5',
  }

My tablet history and Kindle Fire (7th Gen) review

My first tablet was an Acer A500, which ran Honeycomb (Android 3.0). I used that tablet for everything – reading, pictures, studying, and projecting games in the children’s class I taught at the time. It was used more than my laptop, phone, and desktop combined. It served its purpose until my wife accidentally knocked it off the kitchen counter (it was lying flat) and it got a huge dent. It never was the same after that incident. I eventually got rid of it after it no longer held a charge due to the damage.

The next tablet I purchased was a Kindle Fire (2nd gen), which I gave to my wife and borrowed when needed. Life was great. Then, as a reward for good behavior, we let our children play the games and apps we had purchased on the Kindle. Our children are so well behaved (thanks to some awesome parenting tips we received in our Adult Bible Class at church) that my wife no longer had enough Kindle time to do her reading. This is when I bought her a 5th gen Kindle Fire. The difference between versions impressed me. I liked the thinner design, and the new UI was very intriguing. I determined that if I ever wanted a new tablet, I would get myself a Kindle Fire. (I was not in the market for one, as I was a happy 6-inch phablet user.)

Recently, I have had a desire to reduce my reading list by actually reading the books. At first I tried reading from my phablet, but alas, a 6-inch screen is not ideal for reading large amounts of text – even on a high-DPI phone such as my Google Nexus 6. This led me to purchase a Kindle Fire (7th gen), which was released on June 7, 2017.

It has the same intriguing design as the 5th gen device, but with better battery life and more external storage capacity. When I first unboxed and turned on the device, I was happy at how little I had to do to get my new Kindle operational. After 10 minutes of using Fire OS (which is essentially Android without Google), I quickly realized just how attached I am to Google services. Most of my daily-use apps – such as Dropbox and JuiceSSH – were not available. Did I make a mistake buying this $70 device (+$30 case and $9 tax)? Thankfully, there is an alternative, and it works quite well – it doesn’t even require rooting your device!

Wow, after sideloading the four Google apps needed for the Play Store, running a Google Play services update, and downloading my needed apps, I am really enjoying my Kindle Fire (7th gen)! Everything is working as expected, albeit a few milliseconds slower than my phablet for common tasks. The screen size is perfect, the weight and style are also perfect, and I was able to be different and get a yellow one. (My wife has blue, black is the color I usually get, and red seemed too plastic.)

Moving to Desktop GNU/Linux from Windows/Mac

There are many curious individuals who tinker with GNU/Linux as a Server OS and want to experience what it is like as a Desktop OS. The switch is often hindered by two obstacles:

  1. Some daily use programs are not available (e.g., Photoshop, iTunes).
  2. Not knowing what to do if something goes wrong – for example, getting a 3D graphics driver installed and working.

While these are valid reasons and definite showstoppers for some, others can safely migrate to GNU/Linux.

The obstacle of programs

I like Krita as an alternative to Photoshop. The menu options are nearly the same, and I do not have to install a silly theme (as I do in GIMP) or re-learn photo editing just to find where everything is. I have successfully installed Photoshop CS4 with Wine without any issues, but Krita is more fully featured than CS4. Darktable is also a good alternative to Photoshop RAW/Bridge.

Rhythmbox connects to iPhones/iPods the same way iTunes does, but without the store. iTunes itself also runs quite well on a recent version of Wine. Some might also want to check out Clementine.

Almost every program has an alternative. Alternatives can be found via alternativeto.net or Software Recommendations on Stack Exchange.

The unknown obstacles

To use GNU/Linux successfully as a primary desktop OS, in my opinion, one must have a desktop with worthy hardware. I consider myself an AMD guy: I like the price for performance, and I rarely do CPU-intensive tasks on my desktop. When AMD bought ATI, I was also happy, as ATI was my favorite graphics card maker. Unfortunately for me, most desktop GNU/Linux users are developers who need that extra performance; their workstations have Nvidia graphics cards and Intel CPUs. You will therefore often find that desktop GNU/Linux performs better, is easier to set up, and has more tutorials available when running on Nvidia graphics cards.

Captive Portal Overview

I originally authored this on Aug 16, 2016 at http://unix.stackexchange.com. Considering my tutorial did not include an overview, I thought I would re-post it on my blog.


To make a captive portal appear, you need to stop all internet traffic and provide a 302 redirect to the client’s browser. To do this, you need a firewall (like iptables) that redirects all traffic to a web server (like nginx, Apache, etc.), where the web server responds with a 302 redirect to the URL of your login page.

I have written a lengthy article on how to do this with a Raspberry Pi. It basically boils down to iptables redirecting HTTP traffic to the web server:

iptables -t nat -A wlan0_Unknown -p tcp --dport 80 -j DNAT --to-destination 192.168.24.1
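The wlan0_Unknown chain in that rule is a custom NAT chain. A minimal sketch of creating it and hooking it into PREROUTING (interface name and portal address follow the rule above; adjust both for your network) would be:

```shell
# Create the custom NAT chain for clients that have not yet logged in,
# send everything arriving on wlan0 through it, and rewrite HTTP traffic
# to the machine running the portal web server (192.168.24.1 here).
iptables -t nat -N wlan0_Unknown
iptables -t nat -A PREROUTING -i wlan0 -j wlan0_Unknown
iptables -t nat -A wlan0_Unknown -p tcp --dport 80 -j DNAT --to-destination 192.168.24.1
```

Once a client authenticates, you would typically insert a RETURN or ACCEPT rule for its MAC/IP ahead of the DNAT so its traffic flows normally.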

and then the webserver (nginx) redirecting to the login page:

# For iOS
if ($http_user_agent ~* (CaptiveNetworkSupport) ) {
    return 302 http://hotspot.localnet/hotspot.html;
}

# For others
location / {
    return 302 http://hotspot.localnet/;
}

iOS has to be difficult in that it needs the WISPr settings. The contents of hotspot.html are as follows:

<!--
<?xml version="1.0" encoding="UTF-8"?>
<WISPAccessGatewayParam xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.wballiance.net/wispr_2_0.xsd">
<Redirect>
<MessageType>100</MessageType>
<ResponseCode>0</ResponseCode>
<VersionHigh>2.0</VersionHigh>
<VersionLow>1.0</VersionLow>
<AccessProcedure>1.0</AccessProcedure>
<AccessLocation>Andrew Wippler is awesome</AccessLocation>
<LocationName>MyOpenAP</LocationName>
<LoginURL>http://hotspot.localnet/</LoginURL>
</Redirect>
</WISPAccessGatewayParam>
-->

Tupperware announces new container platform

Today, in a surprise move into technology, Tupperware has released a new container platform competing with Docker and rkt. Tupperware’s new platform – named Bowl – has been in alpha for the past 6 months, but now has achieved public beta status.

Tupperware party
Early photograph of a Tupperware party. This announcement was made at a similar one.

What’s unique about Bowl is that it introduces a new concept to containers – Lids. A Lid goes neatly on top of the container, seals the contents from evildoers, and prevents bad code from escaping. Bowls, due to their circular shape, require burping the attached Lid. This burping maneuver was demonstrated live to the audience and was described as a method to keep the containerized application fresh across hybrid clouds. With Burping, no longer do we have to worry about our code going stale inside the container.

Unlike other containers, when done with a Bowl, they go through a cleaning cycle and are available for reuse immediately or stacked in a cupboard for later. Lids, associated with Bowls, are placed underneath to keep the associated roles in relative proximity. Unlike some deployments which require planetary alignment, Bowl comes in several predetermined sizes which negate this prerequisite. If for some reason your code does not fit in a Bowl size, multiple Bowls can be used. This feature instantly transforms your monolith application into a microservice architecture – all without refactoring your workflow!

Bowl will be showcased in global parties as early as next week, but preview respondents have already offered the following remarks about Bowl:

It is so good that Tupperware has launched into the tech business. Tupperware already has such a good influence on minority groups and bringing an interest of tech to them is really a great idea.

Joan P

This isn’t your grandma’s container technology.

Jon S

I was surprised how well Bowl stacked up to the challenge.

Andrew B

And when you’re done, it doubles as a barber aid!

Rob N

This is exactly the kind of container I was looking for – Lightweight, secure, reusable, and dishwasher safe.

Chris B

Bowl is expected to integrate nicely with Kubernetes 1.8 which is due out next year.

Captive Portal Restaurant Menu

I have been contacted several times regarding my captive portal post. In India, there seems to be a surge in popularity for restaurants to have an open WiFi network that prompts the user with a menu/splash page. The caveat is the legal issues encountered when providing free, open wireless internet: to avoid them, the broadcasting device must be disconnected from the internet, although I am curious whether simply blocking internet access for connecting clients would be enough.

It seems like an interesting issue to tackle, but I think creating something out of a WiFi captive portal would be like hammering a square peg through a round hole. It might work (if given enough time and effort), but in the end, it is probably not the right tool for the job.

While writing this post, I was reminded of how Google appears to be tackling this problem. On my Android phone, I let the NSA spy on my whereabouts by enabling location services. It also lets my wife pinpoint where I am physically located. With it enabled, I can visit most shops in my area and get a Google Maps prompt with the business information, reviews, and a few pictures. (Side note: if you appear in court over a childish/dumb action, you validate the judge’s decision when you post a negative review of the courthouse. Also, please do not butcher the English language when reviewing places.)

Utilizing GPS location along with an app that provides the information seems like the best route in this circumstance. An alternative would be an app with WiFi credentials hardcoded in that listens for when a WiFi connection is made, checks whether it matches a predefined SSID, and then attempts to communicate with a local app server to process data. Of course, doing something like that is outside the scope of my tutorials.

Jumping the ship on Evernote

I am a long-time user of Evernote. It currently has the best browser extensions, a wide range of supported operating systems, and a free tier; however, I am getting frustrated with it. In the past year, they have changed plans twice – now the free tier is supported on only two platforms. This has caused me to re-evaluate my use of Evernote. Lately, all I have been using Evernote for is syncing a grocery list between devices and keeping my children’s memories – their sayings, artwork, etc. – in one location. In the past, I also used it for note taking, saving articles, and jotting down ideas. I have also seriously considered buying a subscription just so I can continue uninterrupted.

While this may be a rant from a free user of a free service, I do contribute to the monetization of their service by viewing advertisements. The free-tier limits (except for the maximum number of devices) are adequate for my occasional use and have probably cost Evernote around $3 total over the past several years. The valuation Evernote has placed on its second-level tier ($35/year) is much higher than what I value it at (~$12/year). While I may not be able to set the price of Evernote, I can put a price on what I am willing to pay for a simple note service.

A recent article on opensource.com opened my eyes to note-taking alternatives. I was surprised at how mature Paperwork was; however, it has one simple flaw that throws my grocery-list use case out the window – no checkboxes. This caused me to evaluate Google Keep – yes, it has checkboxes, but it functions more like sticky notes. Then I remembered that Atlassian’s Confluence has checkboxes. Their paid version is $10 for up to ten users (per year if self-hosted, monthly if in the cloud). This fits my budget: I can create grocery lists, take notes, and create notebooks/spaces. While I have not switched away yet, Confluence seems like a viable option since I already have an always-on home server.

I do not use the Keurig – here is why…

The Keurig is a visually pleasing device that appears to belong in the modern kitchen. A few months ago, I was given a first-generation Keurig with a reusable filter and used it as my primary coffee maker. It gave me a sense of faster coffee delivery in the morning – I was happy until I discovered these flaws:

Flaw #1 – I spent more time making coffee than with a drip machine.

While it had a water reservoir, it only lasted for about six tall glasses of coffee. I would also have to swap out the K-Cup if I wanted a cup in the morning and one to take with me – a very common thing for me to do. This leads to another flaw.

Flaw #2 – the Keurig is designed for casual coffee drinkers.

By casual, I mean 3-6 cups a month. Even with a refillable K-Cup, I was spending twice as much on coffee and found myself adding 5 minutes to my normal routine just to use the Keurig.

Flaw #3 – coffee dust

The coffee in store-bought K-Cups was ground too finely, and grounds from my refillable K-Cup often ended up in the bottom of my glass. This was disgusting, and I could not stand throwing away the last sip of coffee because of the coffee dust at the bottom. To combat this, I had to cut filters to the shape of my K-Cup.

After two months of trouble with the Keurig, I got frustrated with drinking coffee. What was designed to be a pleasant, easy coffee-making experience turned out to be painful, time-consuming, and more expensive. I evaluated my habits and found I was doing the exact same steps as with my old drip system, plus extra time fiddling with the Keurig. Once I realized that, I switched back to my old ways, sold the Keurig, and bought more coffee with the money.

Debugging PHP web applications

In 2017, this topic seems a little dated and will probably not get me an opportunity to speak at a conference. While all of the elite programmers, cool kids, and CS grads are talking languages such as Go and Erlang – how to do tracing, performance testing, and the like – it seems very juvenile for me to write about PHP.

PHP is a language made specifically for the web. It is the first web language I learned after HTML 4/CSS. I learned it because it was easy: the syntax was easy, the variables were easy, running it was easy. However, when something broke, it was difficult as a beginner to troubleshoot. I would spend several hours looking at the code I had just written only to find out I had missed a semicolon or a quote. This post covers several things I wish I had known when I started with PHP.

Few posts in the works

I have not posted in a few weeks, mainly because I was taking a rest after posting every week of 2016! I have a few posts coming in the next few weeks. The first will be about debugging PHP applications. The second will be about deploying a high-availability MySQL cluster – what it looked like 10 years ago, and what it will look like 10 years from now. (HINT: Kubernetes + GlusterFS 😉 )

2016 behind, 2017 forward

With the year drawing to a close, I have a habit of looking back at the goals I set for myself, seeing how I did, and setting goals for the new year. My new year’s resolution for 2017 will be 1920×1080 (same as last year). I wish I could upgrade it to 5K, but it will have to do for now.

In 2016, I set a goal to post to my blog every week – I met that goal. I also planned to earn more certifications, and I achieved my LFCE, COA, and Puppet Certified Professional. I also sharpened my Ruby skills.

For 2017, I am not going to write on my blog every week. Instead I will write more lengthy blog posts and tutorials.

In 2016 we saw the Cubs win the World Series, Microsoft join the Linux Foundation, Google join the .NET Foundation, and pigs actually fly. I can only imagine what 2017 will hold.

Easy unix epoch timestamps from CLI

While working on various projects – and ultimately needing Unix timestamps for expiring Swift objects in OpenStack – I needed a quick way to convert past, present, and future times to the Unix epoch. Traditionally, I went to Google, searched for a Unix timestamp converter, and retrieved my seconds that way. Unfortunately, in exams you are not allowed to visit external websites.

If you know how to read documentation, you will already know that the date command has this feature already built in. An excerpt from the docs is as follows:

 ...
       Show the local time for 9AM next Friday on the west coast of the US

              $ date --date='TZ="America/Los_Angeles" 09:00 next Fri'

DATE STRING
       The  --date=STRING  is  a mostly free format human readable date string
       such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29  16:21:42"  or
       even  "next Thursday".  A date string may contain items indicating cal‐
       endar date, time of day, time zone, day of week, relative  time,  rela‐
       tive date, and numbers.  An empty string indicates the beginning of the
       day.  The date string format is more complex than is easily  documented
       here but is fully described in the info documentation.
...

Further reading of the docs will point you to the output format specifiers, such as date +%s for seconds since the epoch. So when the time comes to expire an object from Swift at 17:00 next Friday, you can do something like:

swift post container file -H "X-Delete-At: $(date +%s --date='17:00 next Friday')"
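Before trusting that in an exam, it is worth sanity-checking the conversion itself with GNU date (the fixed date below is just an example with a known epoch value; the relative one depends on when you run it):

```shell
# Epoch seconds for a fixed, unambiguous UTC date – handy for verifying syntax.
date +%s --date='2017-01-01 00:00:00 UTC'    # prints 1483228800

# A relative date, converted to epoch seconds and rendered back
# into human-readable form for a quick visual check.
EXPIRE=$(date +%s --date='17:00 next Friday')
date --date="@$EXPIRE"
```

The `@` prefix tells date to interpret the string as epoch seconds, so you can round-trip any timestamp you are about to hand to Swift.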

OpenStack PS1 snippet

I have been studying for my OpenStack certification test (the COA), which is scheduled for next week. One thing that was painful was keeping track of which user I was interfacing with OpenStack as, since the rc file you download from OpenStack does not update your PS1 prompt. I came up with the following solution and placed it in my ~/.bashrc:


function parse_os_user() {
    if [ -n "${OS_USERNAME}" ]
    then
        echo "(${OS_USERNAME})"
    else
        echo ""
    fi
}

PS1='\u@\h \w `parse_os_user` \$ '
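To see what the helper emits without opening a new shell, you can exercise it directly (the function is repeated here so the snippet stands alone, and "admin" is just an example username):

```shell
# Same helper as above: emits "(user)" when OS_USERNAME is set,
# and nothing when it is unset.
parse_os_user() {
    if [ -n "${OS_USERNAME}" ]
    then
        echo "(${OS_USERNAME})"
    fi
}

OS_USERNAME=admin
parse_os_user    # prints (admin)
unset OS_USERNAME
parse_os_user    # prints nothing
```

Because PS1 invokes it via command substitution, sourcing an OpenStack rc file (which exports OS_USERNAME) makes the current user appear in the prompt on the very next command.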

OpenStack certification

On Dec 20th, I am scheduled to take my COA exam. From the exam requirements page, it appears to be a moderately difficult exam. The few areas I need to work on are Heat templates and Swift object administration. The few things I know about the exam come from what is publicly available via YouTube videos of OpenStack Summit sessions.

One of my troubles in studying for exams is creating content to test myself on the exam objectives. I look at the requirements and say to myself, “I know that,” and nothing gets written for that aspect. One thing I have done in the past is to search GitHub for exam prep questions. One I have found for OpenStack is AJNOURI/COA. He also made a nifty website for his test prep questions.

A key aspect that has helped me pass all of my open-book exams is recalling where my troubled areas are covered in the documentation. Reading the manual has become a best practice of mine. Most of the time, exam questions are covered in the docs, as the exams expect you to have read them.