Category Archives: Linux

Autosign Puppet certificates on AWS

Let’s face it, Puppet’s certificate management is a pain and a huge administrative burden if done manually. Thankfully, Puppet provides several methods of auto-signing certificates. One of these is crafting a special certificate signing request (CSR) and verifying that the CSR is genuine.

On the puppet master

Apply the following code on your puppet master. This will set up the autosign script which will verify your custom certificate signing request. If the CSR is genuine, the puppet master will sign the certificate.

  service { 'puppetserver':
    ensure => running,
    enable => true,
  }

# The file must have execute permissions
# The master will trigger this as `/etc/puppetlabs/puppet/ FQDN`
  file { '/etc/puppetlabs/puppet/':
    ensure  => file,
    mode    => '0750',
    owner   => 'puppet',
    group   => 'puppet',
    content => '#!/bin/bash
HOST=$1
openssl req -noout -text -in "/etc/puppetlabs/puppet/ssl/ca/requests/$HOST.pem" | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa',
  }

# This sets up the required ini setting and restarts the puppet master service
  ini_setting { 'autosign nodes':
    ensure  => present,
    path    => '/etc/puppetlabs/puppet/puppet.conf',
    section => 'master',
    setting => 'autosign',
    value   => '/etc/puppetlabs/puppet/',
    notify  => Service['puppetserver'],
    require => File['/etc/puppetlabs/puppet/'],
  }

On the agents

With our puppet master ready to go, we need to set up our agents to generate the custom certificate request. This can be done by editing /etc/puppetlabs/puppet/csr_attributes.yaml before running the puppet agent, with the following content:

custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s
    pp_image_name:  $(curl -s

Note: The 1.2.840.113549.1.9.7 value (the OID for the challengePassword attribute) must match the string you are grepping for in the autosign script. This specific attribute in the CSR is reserved for purposes such as this.
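To see exactly what the autosign script matches on, you can craft a CSR with a challenge password locally and inspect it with openssl. This is just an illustrative sketch: the /tmp paths and the CN are made up; only the challenge-password value comes from the examples above.

```shell
# Build an OpenSSL config that embeds the challengePassword attribute
# (OID 1.2.840.113549.1.9.7) into the CSR. CN and paths are placeholders.
cat > /tmp/csr.cnf <<'EOF'
[req]
distinguished_name = dn
attributes = attrs
prompt = no
[dn]
CN = agent.example.com
[attrs]
challengePassword = pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
EOF

# Generate a throwaway key and CSR.
openssl req -new -newkey rsa:2048 -nodes -keyout /tmp/agent.key \
  -config /tmp/csr.cnf -out /tmp/agent.csr 2>/dev/null

# The same check the autosign script performs on the master:
openssl req -noout -text -in /tmp/agent.csr | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
```

If the grep succeeds, the master would sign the certificate; if it fails, the CSR is rejected.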


With everything in place, the way to execute this successfully is to pass the below in as the user data script when creating an EC2 instance:

if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir -p /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s
    pp_image_name:  $(curl -s
YAML

An alternative method is to create a custom AMI (especially for auto-scaling groups). I use the puppet code below to create my golden AMI.

  cron { 'run aws_cert at reboot':
    command => '/',
    user    => 'root',
    special => 'reboot',
    require => File['/'],
  }

  file { '/':
    ensure  => file,
    mode    => '0755',
    content => '#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir -p /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
  1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
  pp_instance_id: $(curl -s
  pp_image_name: $(curl -s
YAML

export CERTNAME="aws-node_name-`date +%s`"

/opt/puppetlabs/bin/puppet apply -e "ini_setting { \"certname\": \
  ensure  => present, \
  path    => \"/etc/puppetlabs/puppet/puppet.conf\", \
  section => \"main\", \
  setting => \"certname\", \
  value   => \"$CERTNAME\", \
}"

/opt/puppetlabs/bin/puppet agent -t -w 5',
  }
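The unique-certname line from that script can be sanity-checked on its own. This sketch keeps the post's aws-node_name placeholder prefix and just shows the shape of the generated value:

```shell
# Same construction as the golden-AMI script: a fixed prefix plus epoch
# seconds, so every booted clone registers under a unique certname.
CERTNAME="aws-node_name-$(date +%s)"
echo "$CERTNAME"
```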

Moving to Desktop GNU/Linux from Windows/Mac

There are many curious individuals who tinker with GNU/Linux as a server OS and want to experience what it is like as a desktop OS. The switch is often hindered by two obstacles:

  1. Some daily-use programs are not available (e.g. Photoshop, iTunes).
  2. Not knowing what to do if something goes wrong, or how to get a 3D graphics driver installed and working.

While these are valid reasons and definite showstoppers for some, others can safely migrate to GNU/Linux.

The obstacle of programs

I like Krita as an alternative to Photoshop. The menu options are nearly the same, and I do not have to install a silly theme (like I have to do in GIMP) or re-learn photo editing just to recognize where everything is. I have successfully installed Photoshop CS4 with Wine without any issues, but Krita has more features than CS4. Darktable is also a good alternative to Photoshop RAW/Bridge.

Rhythmbox connects to iPhones/iPods the same way as iTunes does, but without the store. iTunes does run on a recent version of wine quite well. Some might also want to check out Clementine.

Nearly every program has an alternative. Alternatives can be found via Software Recommendations on Stack Exchange.

The unknown obstacles

To use GNU/Linux successfully as the primary Desktop OS, in my opinion, one must have a desktop with worthy hardware. I consider myself an AMD guy. I like the price for performance and I rarely do CPU intensive tasks on my desktop. When AMD bought ATI, I was also happy as ATI was my favorite graphics card. Unfortunately, most Desktop GNU/Linux users are developers and need that extra performance. They have desktop workstations that have Nvidia graphics cards in them with Intel CPUs. You will often find that Desktop GNU/Linux performs better, is easier to use, and has more tutorials for Nvidia graphics cards and how to get them working.

Captive Portal Overview

I originally authored this on Aug 16, 2016. Considering my tutorial did not include an overview, I thought I would re-post it on my blog.

To make a captive portal appear, you need to stop all internet traffic and provide a 302 redirect to the client’s browser. To do this, you need a firewall (like iptables) to redirect all traffic to a webserver (like nginx, apache, etc.), where the webserver responds with a 302 redirect to the URL of your login page.
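At the wire level, the whole trick is an ordinary HTTP redirect. This sketch simply prints the raw response the webserver hands back for any intercepted request (hotspot.localnet is the hostname used later in this post):

```shell
# The minimal HTTP response a captive portal relies on: a 302 status
# plus a Location header pointing at the login/splash page.
printf 'HTTP/1.1 302 Found\r\nLocation: http://hotspot.localnet/\r\nContent-Length: 0\r\n\r\n'
```

Any client that follows redirects will then land on the splash page instead of its original destination.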

I have written a lengthy article on how to do this with a Raspberry Pi. It basically boils down to iptables blocking and redirecting traffic to the webserver:

iptables -t nat -A wlan0_Unknown -p tcp --dport 80 -j DNAT --to-destination

and then the webserver (nginx) redirecting to the login page:

# For iOS
if ($http_user_agent ~* (CaptiveNetworkSupport)) {
    return 302 http://hotspot.localnet/hotspot.html;
}

# For others
location / {
    return 302 http://hotspot.localnet/;
}
iOS has to be difficult in that it needs the WISPr settings. The hotspot.html contents are as follows:

<?xml version="1.0" encoding="UTF-8"?>
<WISPAccessGatewayParam xmlns:xsi="" xsi:noNamespaceSchemaLocation="">
<AccessLocation>Andrew Wippler is awesome</AccessLocation>
</WISPAccessGatewayParam>

Captive Portal Restaurant Menu

I have been contacted several times in regards to my captive portal post. In India there seems to be a surge in popularity for restaurants to have an open WiFi that prompts a user to open up a menu/splash page. The caveat is the legal issues encountered when providing free, open wireless internet. To avoid those legal issues, the device that broadcasts must be disconnected from the internet, although I am curious whether just blocking internet access for those connecting is enough.

It seems like an interesting issue to tackle, but I think creating something out of a WiFi captive portal would be like hammering a square peg through a round hole. It might work (if given enough time and effort), but in the end, it is probably not the right tool for the job.

While writing this post, I was reminded how Google appears to be tackling this problem. On my Android phone, I let the NSA spy on my whereabouts by enabling location services. It also lets my wife pinpoint where I am physically located. With it enabled, I can visit most shops in my area and get a Google Maps prompt with the business information, reviews, and a few pictures. (Side note: if you appear in court over a childish/dumb action, you validate the judge’s decision when you post a negative review of the courthouse. Also, please do not butcher the English language when trying to review places.)

Utilizing GPS location as well as having an app that provides the information seems like the best route to go in this circumstance. An alternative would be to have an app with WiFi credentials hardcoded in, listen for when a WiFi connection is made, check to see if it matches a predefined SSID, and attempt to communicate with a local app server to process data. Of course doing something like that is outside the scope of my tutorials.

Easy unix epoch timestamps from CLI

While working on various projects, and ultimately needing a Unix timestamp for expiring swift objects in OpenStack, I needed a quick way to convert past, present, and future timestamps to the Unix epoch. Traditionally, I went to Google, searched for a Unix timestamp converter, and retrieved my seconds that way. Unfortunately, in exams you are not allowed to visit external websites.

If you know how to read documentation, you will already know that the date command has this feature already built in. An excerpt from the docs is as follows:

       Show the local time for 9AM next Friday on the west coast of the US

              $ date --date='TZ="America/Los_Angeles" 09:00 next Fri'

       The  --date=STRING  is  a mostly free format human readable date string
       such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29  16:21:42"  or
       even  "next Thursday".  A date string may contain items indicating cal‐
       endar date, time of day, time zone, day of week, relative  time,  rela‐
       tive date, and numbers.  An empty string indicates the beginning of the
       day.  The date string format is more complex than is easily  documented
       here but is fully described in the info documentation.

Further reading of the docs will point you to formatting the output yourself; date +%s returns the time as seconds since the Unix epoch. So when the time comes to expire an object from swift at 17:00 next Friday, you can do something like:

swift post container file -H "X-Delete-At: $(date +%s --date='17:00 next Friday')"
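A few plain GNU date invocations worth having in muscle memory for the exam room; nothing here is OpenStack-specific. The fixed date is the one from the man page excerpt above:

```shell
# Present: the current time as epoch seconds.
date +%s

# Past: a fixed date converts to a fixed epoch value.
date +%s --date='2004-02-29 16:21:42 -0800'   # 1078100502

# Future: relative strings work too.
date +%s --date='17:00 next Friday'

# And back again: render an epoch value as a human-readable date.
date -u --date=@1078100502
```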

OpenStack PS1 snippet

I have been studying for my OpenStack certification test (the COA) which is scheduled next week. One thing that was painful to keep track of was the user I was using to interface with OpenStack as the rc file you download from OpenStack does not update your PS1 prompt. I came up with the following solution and placed it in my ~/.bashrc:

function parse_os_user() {
    if [ ! "${OS_USERNAME}" == "" ]; then
        echo "(${OS_USERNAME})"
    else
        echo ""
    fi
}

PS1='\u@\h \w `parse_os_user` \$ '
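A quick demonstration of what the helper prints with and without an OpenStack rc file sourced (the username here is made up; the function body is repeated so the demo is self-contained):

```shell
# Prints "(user)" when OS_USERNAME is set, and nothing otherwise.
parse_os_user() {
    if [ ! "${OS_USERNAME}" == "" ]; then
        echo "(${OS_USERNAME})"
    else
        echo ""
    fi
}

OS_USERNAME=admin
parse_os_user     # prints "(admin)"

unset OS_USERNAME
parse_os_user     # prints an empty line
```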

OpenStack certification

On Dec 20th, I am scheduled to take my COA exam. From the exam requirements page, it appears to be a moderately difficult exam. The few points I need to work on are heat templates and swift object administration. The few things I know about the exam are what is publicly available via YouTube videos of the OpenStack summit sessions.

One of my troubles when studying for exams is creating content to test myself on the objectives of the exam. I look at the requirements and say to myself, “I know that,” and nothing gets written for that aspect. One thing I have done in the past is to search GitHub for exam prep questions. One I have found for OpenStack is AJNOURI/COA. He also made a nifty website for his test prep questions.

A key aspect that has helped me pass all of my open book exams is recalling the locations of my troubled areas. Looking at the docs and reading the manual has become a best practice of mine. Most of the time, exam questions are covered in the docs, as the exams expect you to have read them.

Using Puppet to host a private RPM repository

A repository is a place where files are stored, indexed, and made available through a package manager to anyone who has the repository information. With rpm-based systems, a repository is created with a tool called createrepo. Most of the time, publicly available repositories already offer the packages your server needs. When you have a custom application you want to deploy (or even rebuild an existing application with your patches), it is best to distribute that package with a repository rather than a file share or some other means. Often a folder structure is created so that differing client OS versions can connect to the same repository and access versions compiled for that specific release. In my example below, I am not creating this folder structure, as I am only serving one major release – CentOS 7 – and the packages I am generating are website directories, which are just a collection of portable code.
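For reference, the per-release folder structure alluded to above might look like the sketch below. The release names and paths are hypothetical (the setup in this post skips this because it serves only CentOS 7):

```shell
# One subtree per OS release/arch; each would get its own createrepo run
# so clients can point their baseurl at the matching subtree.
mkdir -p /tmp/yumrepos/el7/x86_64 /tmp/yumrepos/el8/x86_64
find /tmp/yumrepos -type d | sort
```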

A private repository is not a tricky feat – all you have to do is serve the repository via HTTPS and require HTTP basic authentication. You then configure the clients to connect to the repository with the basic authentication credentials embedded in the baseurl string. HTTPS is not strictly required to serve a repository, but it does prevent network snoopers from seeing your repository credentials.
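The httpauth resource in the manifest below manages the credentials file for you, but the htpasswd entry itself is easy to build by hand. A sketch using openssl (the user and password are the placeholders from this post; the output path is made up):

```shell
# Apache-style MD5 (apr1) entry, the same format httpd's basic auth reads.
printf 'repouser:%s\n' "$(openssl passwd -apr1 'some-long-password')" > /tmp/htpasswd.users
cut -d: -f1 /tmp/htpasswd.users   # prints "repouser"
```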

Now that we know what is needed for a private repository, we can then define it in our puppet code.

node '' {

  file { '/var/yumrepos':
    ensure => directory,
  }

  createrepo { 'yumrepo':
    repository_dir => '/var/yumrepos/yumrepo',
    repo_cache_dir => '/var/cache/yumrepos/yumrepo',
    enable_cron    => false, # optional cron job to index new rpms every 10 minutes
  }

  package { 'httpd':
    ensure => installed,
  }

  httpauth { 'repouser':
    ensure    => present,
    file      => '/usr/local/nagios/etc/htpasswd.users',
    password  => 'some-long-password',
    mechanism => basic,
    require   => Package['httpd'],
  }

  file { '/usr/local/nagios/etc/htpasswd.users':
    ensure => file,
    owner  => 'nginx',
    mode   => '0644',
  }

  class { 'nginx':
    manage_repo    => true,
    package_source => 'nginx-mainline',
  }

  nginx::resource::vhost { "${::fqdn}":
    www_root             => '/var/yumrepos/yumrepo',
    index_files          => [],
    autoindex            => 'on',
    rewrite_to_https     => true,
    ssl                  => true,
    auth_basic           => 'true',
    auth_basic_user_file => '/usr/local/nagios/etc/htpasswd.users',
    ssl_cert             => "/etc/puppetlabs/puppet/ssl/public_keys/${::fqdn}.pem",
    ssl_key              => "/etc/puppetlabs/puppet/ssl/private_keys/${::fqdn}.pem",
    vhost_cfg_prepend    => {
      'default_type' => 'text/html',
    },
  }
}

For the above code to work, we need the required modules:

mod 'palli/createrepo', '1.1.0'
mod 'puppet/nginx', '0.4.0'
mod 'jamtur01/httpauth', '0.0.3'

We can then use the following declaration on our nodes to use this repository.

yumrepo { 'private-repo':
  descr           => 'My Private Repo - x86_64',
  baseurl         => '',
  enabled         => 'true',
  gpgcheck        => 'false',
  metadata_expire => '1',
}

You now have a fully functional private repository – deploy your awesome software.

Repercussions from a 1.1 Tbps DDoS

In case you missed it, the largest recorded Distributed Denial of Service (DDoS) attack occurred. While under DDoS, a victim’s server (or servers) is under high load and cannot complete all of the requests made of it. Basically, a DDoS victim is someone the attacker wants silenced on the internet. In order to send a DDoS of that magnitude, the attacker has to have control over many computers – a botnet. It is believed that this attack originated from over 150,000 computers in the IoT category (smart TVs, refrigerators, thermostats, etc.). Due to their poor default security, IoT devices are easy targets for hackers who intend to add them to their botnets. A recent article on Ars Technica points out the current issues with IoT and Linux kernel security but, as with most articles of this nature, provides no clear-cut solution to the problem we are experiencing. Below are my thoughts on this situation and how it may be resolved.

We need a governing body to issue a seal of approval for IoT and anything that is compiled with the Linux kernel. Then we, as consumers, must use, buy, and encourage others to buy from the companies that have this seal. The governing body should ensure each company seeking the seal comply with the following criteria:

  1. Every new device created and sent to market has a minimum of 5 years’ worth of bi-monthly security patches and updates from the day of release to the public.
  2. In the event the company goes bankrupt, dissolves, or cannot support any older product it has released in the past 5 years, the company must provide schematics, instructions, or software so that open source enthusiasts can recreate, patch, or upgrade the legacy product.
  3. No known vulnerability may be knowingly left unpatched.
  4. When a CVE is identified on a company’s product, a test case must be created and run on that code base for every future release.
  5. A notification service must be in place for when new updates are released, available in RSS or email form.
  6. Automatic updates should occur over HTTPS.
  7. Backdoors, admin terminals, etc. should require a physical connector to be attached to the device in order to grant access.

For a potential company to get this approval, it may seem like an arduous task to get all the controls in place; however, by applying DevOps methodologies, these tasks can become a simple feat. This would require the governing body not only to enforce the list, but also to have training available to help companies comply with it. For this reason, I suggest the Linux Foundation become this governing body and issue the seals of approval.

First puppet module published

I completed my first public module for puppet and submitted it to the puppet forge. The module seems too simple to package into a build and submit to the forge; however, I made it public for these reasons:

  1. I needed experience with puppet code testing. This helped me at the most basic level.
  2. I felt like someone else could benefit from the code – even if it is one person.
  3. I wanted to do it.

Still, the code seems too juvenile to be submitted to the forge. All it does is take the hostname of a Digital Ocean droplet and submit its IP address as a new DNS record inside of Digital Ocean DNS. The code is located here.

I almost want to follow up with this and develop my duplicity module into reusable code for the community.

Provisioning VMs with cloud init

One of the easiest ways to deploy a virtual machine in oVirt is to first install the OS and then turn it into a template. This will allow you to copy that template to deploy new instances. One mundane task after a new template is copied to a new instance is logging in, changing the IP, setting the hostname, setting up Puppet, running puppet, etc. cloud-init is the tool designed to fix that mundane process by allowing those steps to be automated. oVirt/RHEV (as well as OpenStack, AWS, and others) allow you to pass in user data, which is then supplied to cloud-init after the template is copied over and turned on. This allows for scripting on the new VM – easing deployment.

For my environment, I wanted a CentOS 7 template. To have that, I must first install CentOS on a new VM and seal it (Windows calls this Sysprep). Before I seal it, I must install cloud-init and any other tools I might use for deployment – such as puppet. Here are the steps to obtain just that:

Continue reading Provisioning VMs with cloud init
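To give a flavor of the user data involved, here is a minimal #cloud-config sketch of the kind oVirt/RHEV hands to cloud-init on first boot. The hostname and the puppet run are illustrative assumptions, not the exact steps behind the cut:

```shell
# Write a user-data payload; in practice the virtualization platform
# passes this to the VM and cloud-init consumes it on first boot.
cat > /tmp/user-data <<'EOF'
#cloud-config
hostname: centos7-clone
fqdn: centos7-clone.example.com
runcmd:
  - /opt/puppetlabs/bin/puppet agent -t
EOF

head -n 1 /tmp/user-data   # prints "#cloud-config"
```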

Securing PWM

In last week’s post we set up PWM insecurely. In this post, we are going to lock it down and install MySQL to store the reset questions. This guide assumes you have this CentOS 7 server publicly accessible with ports 80 and 443 available to the entire world. First, we will need to install MySQL, set up a database, and add a user to that database. To do that, we need to edit our manifest.pp and append the following:

  class { '::mysql::server':
    root_password           => 'My4cc0unt$$password!',
    remove_default_accounts => true,
    package_name            => 'mariadb-server',
    package_ensure          => 'installed',
    service_name            => 'mariadb',
  }

  mysql::db { 'pwm':
    user     => 'pwm',
    password => 'pwm_passworD2!', # Can't do a password hash here :(
  }

  class { 'mysql::bindings':
    java_enable => true,
  }

  file { '/opt/tomcat8/pwm/lib/mysql-connector-java.jar':
    ensure  => link,
    target  => '/usr/share/java/mysql-connector-java.jar',
    require => Class['mysql::bindings'],
  }
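About that "can't do a password hash here" comment: the hash MySQL/MariaDB stores for its native authentication is just '*' followed by an uppercased double SHA-1 of the password, which you can compute yourself if you ever need to supply one elsewhere. A sketch using openssl, with the password taken from the manifest:

```shell
# mysql_native_password hash: '*' + UPPER(SHA1(SHA1(password))).
PASS='pwm_passworD2!'
INNER=$(printf '%s' "$PASS" | openssl dgst -sha1 -binary | openssl dgst -sha1 -hex | awk '{print $NF}')
HASH="*$(printf '%s' "$INNER" | tr 'a-f' 'A-F')"
echo "$HASH"   # a '*' followed by 40 uppercase hex digits
```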

We will also need to install additional modules: Continue reading Securing PWM

Linux training on sale until 7/31/16

The Linux Foundation is offering select courses at a discount until 7/31/16. Some offers are up to 55% off. You can also get an additional 10% off at check-out by using the code GSHOP. That brings the prices down to:

$180 – For Essentials of System Administration (LFS201) or Linux Networking and Administration (LFS211)

$315 – Essentials of System Administration AND Linux Networking and Administration

$269.10 – Certified Linux SysAdmin or Certified Linux Engineer

$495 – Certified Linux Rockstar

You can find these prices by using their special system administrator appreciation sale page and using the checkout code GSHOP. If you were looking for a time to level up your Linux knowledge, now is the time to do it.

OpenWRT Captive Portal

In a previous post, I explained how to set up a captive portal on a Raspberry Pi which was running Raspbian (Debian). If you read that article, you can skip the next paragraph.

A captive portal is a piece of software that prompts for user interaction before allowing the client to access the internet or other resources on the network. It is a combination of a firewall and a webserver. In this tutorial, I will explain how to create an open WiFi network on OpenWRT firmware. Before deploying an open WiFi network, you may want to consult a lawyer about the legality of and restrictions on having one. You can also review what has been said by lawyers here and here.

To set up a captive portal on a wireless access point (WAP), you will need to have the OpenWRT firmware installed and have at least 5 MB of free space. My TP-Link 1043ND had enough space, and this article was tested against it. This article assumes you have OpenWRT installed without any additional addons and have plenty of space to spare.

Continue reading OpenWRT Captive Portal

Learn GNU/Linux the easy way

Let’s face it, Linux is a kernel, and no matter what distribution you use, it is all the same. You have a repository of packages, you get a package manager to manage your packages, you get a desktop environment, and you get the freedom to tinker down to the lowest level of the kernel to configure things like IP routing and forwarding.

Differences lie in the release cycle of the distribution, package names, and the default desktop environment – though you can find spins to even change that part. Each is trying to tackle a specific problem and comes with a solution. It may be security, research, UI, stability, or even high performance computing.

If you are starting out, pick a user-friendly distribution like Ubuntu or Debian. Use it with the defaults for 6 months while trying to learn as much as you can. Then move on to a specific use case: for enterprise, you can use CentOS, RedHat, Ubuntu, or SUSE (you will get the best hardware/software support if you go that route); for home use, you may want to go with Debian, Ubuntu, Arch, Gentoo, Fedora, or anything else you want to use; for embedded, you may go with Debian, Yocto, Gentoo, OpenEmbedded, OpenWRT, and others; for stability and security, you may want to go with Debian or one of the enterprise distributions.

At the end of the day, it is all built upon the Linux kernel – unless you are using the Debian BSD fork.