Category Archives: Tidbits

Captive Portal Restaurant Menu

I have been contacted several times regarding my captive portal post. In India, there seems to be a surge in popularity for restaurants to offer an open WiFi network that prompts the user with a menu/splash page. The caveat is the legal issues encountered when providing free, open wireless internet access. To avoid those legal issues, the device that broadcasts must be disconnected from the internet, although I am curious whether simply blocking internet access for those connecting would be enough.

It seems like an interesting issue to tackle, but I think creating something out of a WiFi captive portal would be like hammering a square peg through a round hole. It might work (if given enough time and effort), but in the end, it is probably not the right tool for the job.

While writing this post, I was reminded how Google appears to be tackling this problem. On my Android phone, I let the NSA spy on my whereabouts by enabling location services. It also lets my wife pinpoint where I am physically located. With it enabled, I can visit most shops in my area and get a Google Maps prompt with the business information, reviews, and a few pictures. (Side note: if you appear in court over a childish/dumb action, you validate the judge’s decision when you post a negative review of the courthouse. Also, please do not butcher the English language when trying to review places.)

Utilizing GPS location along with an app that provides the information seems like the best route to go in this circumstance. An alternative would be an app with the WiFi credentials hardcoded in that listens for when a WiFi connection is made, checks whether it matches a predefined SSID, and then attempts to communicate with a local app server to process data. Of course, doing something like that is outside the scope of my tutorials.

I do not use the Keurig – here is why…

The Keurig is a visually pleasing device that appears to belong in the modern kitchen. A few months ago, I was given a first-generation Keurig with a reusable filter and used it as my primary coffee consumption device. It gave me a sense of faster coffee delivery in the morning – I was happy until I discovered these flaws:

Flaw #1 – I spent more time making coffee than with a drip machine.

While it had a reservoir of water, that only lasted for about 6 tall glasses of coffee. I would have to switch out the K-Cup if I wanted a cup in the morning and one to take with me – a very common thing for me to do. This led me to another flaw.

Flaw #2 – the Keurig is designed for casual coffee drinkers.

By casual I mean 3-6 cups a month. Even with a refillable K-Cup, I was spending twice as much on coffee and found myself adding 5 minutes to my normal routine just to use the Keurig.

Flaw #3 – coffee dust

The coffee in store-bought K-Cups was ground too finely, and grounds from my refillable K-Cup often found themselves at the bottom of my glass. This was disgusting, and I could not stand throwing away the last sip of coffee because it had coffee dust at the bottom. To combat this, I had to cut filters in the shape of my K-Cup.

After 2 months of trouble with the Keurig, I got frustrated with drinking coffee. What was designed to be a pleasant, easy coffee-making experience turned out to be painful, time consuming, and more expensive. I evaluated my habits with the Keurig and found I was doing the exact same things as with my old drip system, but spending more time adapting them to the Keurig. Once I realized that, I switched back to my old ways, sold the Keurig, and bought more coffee with the money.

Debugging PHP web applications

In 2017, this topic seems a little dated and will probably not get me an opportunity to speak at a conference. While all of the elite programmers, cool kids, and CS grads are talking about languages such as Go and Erlang – how to do tracing, performance testing, and the like – it seems very juvenile for me to write about PHP.

PHP is a language made specifically for the web. It is the first web language I learned after HTML 4/CSS. I learned it because it was easy. The syntax was easy, the variables – easy, running it – easy; however, when something broke, it was difficult as a beginner to troubleshoot. I would spend several hours looking at the code I had just written only to find out I had missed a semicolon or quote. This post covers several things I wish I had known when I started with PHP. Continue reading Debugging PHP web applications

Docker is not a source to blame

I have been reading a few recently published articles regarding the use of Docker in production. Of the articles I read, all seem to complain about the instability of Docker and the Docker ecosystem, and they lament persistent storage. While I have not run Docker in production for a lengthy amount of time, I suspect these issues stem from operator error and are not entirely Docker’s fault.

One article I read boldly claimed that Docker created a new file system in one year and that it is not humanly possible to have created one in such a short amount of time. I think this writer has never heard of the DevOps philosophy or the minimum viable product (MVP). Basically, you do not need 100% of the features to have a working product. That makes it entirely possible to build a single file system – though not with all of the features – within a short time frame. It is also worth noting that a year into this development process, a second file system was created. Just like in real life, if you wait to ship a product with 100% of the features, you will never ship the product.

If you are losing data because you did not properly mount your volumes on an HA storage backend (such as GlusterFS or DRBD), you deserve to have lost the data. I know what it is like to lose 50TB of unique data due to a failed storage device, no current backups, and the shame and cost of having to send it over to DriveSavers (which, true to their name, they are!) for recovery. That is a painful experience and not worth repeating. If you do the same thing and expect different results, the issue lies with the operator and not the tool. Drastic changes were made when that loss occurred, including developing a new backup solution and having 1-to-1 replication of the data. It also ingrained a permanent memory in my subsystem to never let that happen again.

Personally, I think running Docker in a public cloud is a waste of company resources – there is no price difference between a VM and a single Docker container of the same capacity on AWS’ EC2 container platform. Even if you spun up an Atomic Host or similar, you still have to deal with networking constraints for your file storage. This is something best handled in house, as you can scale your network infrastructure to match your workload.

The most important factor in all of this fuss about Docker is that it is open source software. If you do not have the capacity to find flaws, make patches, and submit those patches upstream for review, you are better off using a proprietary product that does not have such needs. Again, the issue lies with the operators and not the tools.

Using Puppet to host a private RPM repository

A repository is a place where files are stored, indexed, and made available through a package manager to anyone who has the repository information. On RPM-based systems, a repository is created with a tool called createrepo. Most of the time, publicly available repositories already offer the packages your server needs. When you have a custom application you want to deploy (or even a rebuild of an existing application with your patches), it is best to distribute that package with a repository rather than a file share or some other means. Often a folder structure is created so that differing client OS versions can connect to the same repository and access versions compiled for that specific release. In my example below, I am not creating this folder structure, as I am only serving one major release – CentOS 7 – and the packages I am generating are website directories, which are just collections of portable code.

A private repository is not a tricky feat – all you have to do is serve the repository over HTTPS and require HTTP basic authentication. You then configure the clients to connect to the repository with the basic authentication credentials in the URL string (i.e. baseurl=https://user:pass@repo.example.com/). HTTPS is not required to serve a repository, but it does prevent network snoopers from seeing your repository credentials.

Now that we know what is needed for a private repository, we can then define it in our puppet code.

node 'repo.example.com' {

  file { '/var/yumrepos':
    ensure => directory,
  }

  createrepo { 'yumrepo':
    repository_dir => '/var/yumrepos/yumrepo',
    repo_cache_dir => '/var/cache/yumrepos/yumrepo',
    enable_cron    => false, # optional cron job to regenerate the repository metadata every 10 minutes
  }

  package { 'httpd':
    ensure => installed,
  }

  httpauth { 'repouser':
    ensure    => present,
    file      => '/usr/local/nagios/etc/htpasswd.users',
    password  => 'some-long-password',
    mechanism => basic,
    require   => Package['httpd'],
  }

  file { '/usr/local/nagios/etc/htpasswd.users':
    ensure => file,
    owner  => 'nginx',
    mode   => '0644',
  }

  class { 'nginx':
    manage_repo    => true,
    package_source => 'nginx-mainline',
  }

  nginx::resource::vhost { $::fqdn:
    www_root             => '/var/yumrepos/yumrepo',
    index_files          => [],
    autoindex            => 'on',
    rewrite_to_https     => true,
    ssl                  => true,
    auth_basic           => 'true',
    auth_basic_user_file => '/usr/local/nagios/etc/htpasswd.users',
    # use the agent certificate (certs/), not the bare public key, for TLS
    ssl_cert             => "/etc/puppetlabs/puppet/ssl/certs/${::fqdn}.pem",
    ssl_key              => "/etc/puppetlabs/puppet/ssl/private_keys/${::fqdn}.pem",
    vhost_cfg_prepend    => {
      'default_type' => 'text/html',
    },
  }

}

For the above code to work, we need the following modules in our Puppetfile:

mod 'palli/createrepo', '1.1.0'
mod 'puppet/nginx', '0.4.0'
mod 'jamtur01/httpauth', '0.0.3'

We can then add the following declaration to our nodes to use this repository.

yumrepo { 'private-repo':
  descr           => 'My Private Repo - x86_64',
  baseurl         => 'https://repouser:some-long-password@repo.example.com/',
  enabled         => 'true',
  gpgcheck        => 'false',
  metadata_expire => '1',
}

You now have a fully functional private repository – deploy your awesome software.

Website protection

There are several factors that go into securing a web application. Most are second nature to seasoned system administrators, but it is still too common to talk to someone who does not know how to properly secure a web application. Here is the common checklist I go through when determining whether a website is secure.

  • Is it using a firewall?
  • Am I using unique passwords that are over 20 characters?
  • Are passwords required to alter data?
  • Is my codebase up to date?
  • Are the only public facing ports HTTP and HTTPS?
  • Do I protect data in transit from the user to my site by enforcing HTTPS?
  • Do I protect data from my website to the database with SSL?
  • Is my database only accessible to my application?
  • Do I have my database and application on different servers?
  • Can a malicious user drop/delete/alter data from my database from a form/switch/button that is publicly accessible on my website or do they need to login to perform that operation?
  • Do I have separate connections and users to the database for writing and reading data?
  • Do I rate limit connections via a web application firewall or a utility like fail2ban (see the sketch after this list)?
  • Am I reading and blocking malicious inputs via web application firewall or mod_security?
  • Can anyone brute-force a login, or am I blocking it after 5 tries?
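
As one concrete example of the rate-limiting item above, here is a minimal sketch (not from the original checklist) that manages fail2ban with core Puppet resources. The package and service names, the nginx-http-auth filter, and the log path are assumptions that may need adjusting for your distribution.

# Minimal fail2ban sketch using only core Puppet resources.
# Assumptions: the fail2ban package ships the nginx-http-auth filter and
# nginx logs authentication failures to /var/log/nginx/error.log.
package { 'fail2ban':
  ensure => installed,
}

file { '/etc/fail2ban/jail.local':
  ensure  => file,
  require => Package['fail2ban'],
  notify  => Service['fail2ban'],
  content => @(JAIL),
    [DEFAULT]
    bantime  = 3600
    findtime = 600
    maxretry = 5

    [nginx-http-auth]
    enabled = true
    port    = http,https
    logpath = /var/log/nginx/error.log
    | JAIL
}

service { 'fail2ban':
  ensure  => running,
  enable  => true,
  require => Package['fail2ban'],
}

On CentOS 7, fail2ban typically comes from EPEL, so that repository would need to be enabled before the package resource can succeed.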

GlusterFS overview

GlusterFS is a distributed file system. Think of it as a replacement for traditional file storage (a single NFS/Samba server), an alternative to Microsoft’s DFS, or a modern take on a SAN. It really shines when you have multiple locations and need file servers which must hold the same data and be continually in sync. It is also superb for virtual machine disks, as they then become highly available.

You can use GlusterFS in replica, distributed, and distributed-replica models. Replica means a copy of file A is located on all GlusterFS hosts. Distributed means file A is on some hosts and file B is on the other hosts. Distributed-replica is a combination of both – in other words, files are distributed across sets of hosts, and each set holds replicas of the files assigned to it.

To get started with GlusterFS, all you need is commodity hardware. Nothing has to match – not even the hard drive space. GlusterFS will configure the storage allocation pool automatically. I do recommend at least a 1 Gb NIC and a large internet pipe between locations. Partitioning your system appropriately must also be considered – have separate mounts for /var/log and /data. Keeping /data as the location of your shares makes adding and removing nodes consistent with the documentation.

You need GlusterFS hosts in multiples of 2: replica and distributed volumes require a minimum of 2 hosts, and distributed-replica requires a minimum of 4. If you plan on serving virtual machines off of the GlusterFS volume, multiples of 3 are recommended. Clusters can also be geographically bound so that if one node fails, your clients will connect to another Gluster server in that region rather than just any Gluster node.

The quick start documentation goes over setting up two nodes, pairing them together, connecting via the GlusterFS protocol on your client, and creating 100 files. In total, this is about 6 commands.
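
For a rough idea of what those commands look like when wrapped in Puppet, here is a sketch using core package, service, and exec resources; the hostnames, brick paths, and volume name are placeholders I made up, not values from the quick start guide.

# Rough sketch of the GlusterFS quick start, expressed as Puppet resources.
# Hostnames, brick paths, and the volume name below are illustrative only.
package { 'glusterfs-server':
  ensure => installed,
}

service { 'glusterd':
  ensure  => running,
  enable  => true,
  require => Package['glusterfs-server'],
}

# Run these on one node only; the shell provider lets the guards use pipes.
exec { 'probe-peer':
  command  => 'gluster peer probe gluster2.example.com',
  unless   => 'gluster peer status | grep -q gluster2.example.com',
  path     => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
  provider => shell,
  require  => Service['glusterd'],
}

exec { 'create-volume':
  command  => 'gluster volume create gv0 replica 2 gluster1.example.com:/data/brick1/gv0 gluster2.example.com:/data/brick1/gv0',
  unless   => 'gluster volume info gv0',
  path     => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
  provider => shell,
  require  => Exec['probe-peer'],
}

exec { 'start-volume':
  command  => 'gluster volume start gv0',
  unless   => 'gluster volume info gv0 | grep -q "Status: Started"',
  path     => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
  provider => shell,
  require  => Exec['create-volume'],
}

Clients would then mount the volume with the GlusterFS FUSE client (mount -t glusterfs gluster1.example.com:/gv0 /mnt), which is the step the quick start uses to create the 100 test files.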

For managing a large cluster of GlusterFS servers, one may want to take a look at heketi, which manages the lifecycle of GlusterFS volumes. Facebook also developed a tool called AntFarm, but it is currently closed source.

Repercussions from a 1.1 Tbps DDoS

In case you missed it, the largest recorded Distributed Denial of Service (DDoS) attack occurred. While under DDoS, a victim’s server (or servers) is under high load and cannot complete all of the requests made to it. Basically, a DDoS victim is someone the attacker wants silenced on the internet. In order to send a DDoS of that magnitude, the attacker has to have control over many computers – a botnet. It is believed that this attack originated from over 150,000 devices in the IoT category (smart TVs, refrigerators, thermostats, etc.). Due to their poor default security, IoT devices are easy targets for hackers who intend to add them to their botnets. A recent article on Ars Technica points out the current issues with IoT and Linux kernel security but, like most articles of this nature, provides no clear-cut solution to the problem we are experiencing. Below are my thoughts on this situation and how it may be resolved.

We need a governing body to issue a seal of approval for IoT devices and anything else that ships with the Linux kernel. Then we, as consumers, must use, buy, and encourage others to buy from the companies that have this seal. The governing body should ensure each company seeking the seal complies with the following criteria:

  1. Every new device created and sent to market has a minimum of 5 years’ worth of bi-monthly security patches and updates from the day of its release to the public.
  2. In the event the company goes bankrupt, dissolves, or cannot support any older product it has released in the past 5 years, the company must provide schematics, instructions, or software so that open source enthusiasts can recreate, patch, or upgrade the legacy product.
  3. No known vulnerability may be knowingly left unpatched.
  4. When a CVE is identified in a company’s product, a test case must be created and run against that code base for every future release.
  5. A notification service must be in place for when new updates are released and must be available in RSS or email form.
  6. Automatic updates should occur over HTTPS.
  7. Backdoors, admin terminals, etc. should require a physical connector to be attached to the device in order to grant access.

For a potential company to get this approval, it may seem like an arduous task to put all of these controls in place; however, by applying DevOps methodologies, these tasks become a simple feat. This would require the governing body not only to enforce the list, but also to have training available for complying with it. For this reason, I suggest the Linux Foundation become this governing body and issue the seals of approval.

First puppet module published

I completed my first public module for Puppet and submitted it to the Puppet Forge. It seemed too simple to package up and submit to the Forge; however, I made it public for these reasons:

  1. I needed experience with puppet code testing. This helped me at the most basic level.
  2. I felt like someone else could benefit from the code – even if it is one person.
  3. I wanted to do it.

Still, the code seems too juvenile to be submitted to the Forge. All it does is take the hostname of a DigitalOcean droplet and submit its IP address as a new DNS record inside of DigitalOcean DNS. The code is located here.

I almost want to follow up on this and develop my duplicity module into reusable code for the community.

Signs you are doing IT wrong

  1. You still use FTP
  2. You use SFTP
  3. You have a single server hosting 1 website, MySQL, and PHP. It has 4+ GB of RAM and you only have ~2,000 visitors a day.
  4. You log in via root
  5. You don’t use version control
  6. You use a control panel for servers to which you have SSH access
  7. It takes you over an hour to migrate 1 website
  8. Your DNS record TTLs are over 10 minutes
  9. Your SQL server is not accessible over SSL/TLS
  10. You use mod_php instead of reverse proxying to php-fpm
  11. You develop for the web on Windows
  12. You chmod 777
  13. You use modules/plugins that require chmod 777
  14. You have no backups
  15. You host multiple websites on one server (internal-only websites excluded)
  16. You SSH with passwords
  17. You reuse passwords
  18. You don’t read books
  19. You don’t attend conferences
  20. You attend more than 6 conferences a year
  21. You use Skype for communication
  22. You make a separate mobile site
  23. You add more RAM to fix your memory leaks

Iced coffee is the best

I am not a very big fan of hot drinks, but I enjoy drinking a cup/glass/thermos/pot/gallon of coffee. I especially drink more of it when my taste buds dance around and say, “Wow! That was some good, quality coffee!” A few weeks ago I set out to find a better way to make my favorite drink – iced coffee. In my opinion, the best way to buy coffee is in whole bean form. I tend to buy a brand that is roasted in my region – supporting the local economy – and that also tastes good. I store the whole bean bag in my freezer and the ground beans in a small coffee can in my refrigerator.

At first, I tried pouring hot coffee over frozen coffee cubes, then added my refrigerated creamer. This lasted for a few weeks, but I did not notice a huge difference in taste between water ice cubes and coffee ice cubes.

Second, I tried cold brewing coffee – placing ground coffee beans in cold water in the refrigerator overnight. This only resulted in weak, flavorless coffee.

Next, I tried hot brewing coffee, pouring it into a container, and letting it sit in the refrigerator overnight. This seems to be the best option so far. I still get to keep my 1.5 tbsp ratio of coffee beans to resulting liquid, and the ice cubes do not melt when the coffee is poured over them. I think I will stick with this option for now.

Provisioning VMs with cloud-init

One of the easiest ways to deploy a virtual machine in oVirt is to first install the OS and then turn it into a template. This allows you to copy that template to deploy new instances. One mundane task after a new template is copied to a new instance is logging in, changing the IP, setting the hostname, setting up Puppet, running Puppet, etc. cloud-init is the tool designed to fix that mundane process by allowing those steps to be automated. oVirt/RHEV (as well as OpenStack, AWS, and others) allows you to pass in user data, which is then supplied to cloud-init after the template is copied over and turned on. This allows for scripting on the new VM – easing deployment.

For my environment, I wanted a CentOS 7 template. To have that, I must first install CentOS on a new VM and seal it (Windows calls this Sysprep). Before I seal it, I must install cloud-init and any other tools I might use for deployment – such as Puppet. Here are the steps to obtain just that:
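
As a minimal sketch of just the package-installation part of that preparation (not the full procedure in the post), something like the following could be applied on the template VM before sealing; the puppet-agent package name assumes the Puppet Labs release repository is already configured.

# Install cloud-init and the Puppet agent on the template before sealing.
# Assumes the CentOS 7 base repos and the Puppet Labs release repo are enabled.
package { ['cloud-init', 'puppet-agent']:
  ensure => installed,
}

# Ensure the cloud-init boot stages run on the first boot of a cloned VM.
service { ['cloud-init-local', 'cloud-init', 'cloud-config', 'cloud-final']:
  enable  => true,
  require => Package['cloud-init'],
}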

Continue reading Provisioning VMs with cloud-init

Securing PWM

In last week’s post we set up PWM insecurely. In this post, we are going to lock it down and install MySQL to store the reset questions. This guide assumes you have this CentOS 7 server publicly accessible with ports 80 and 443 open to the entire world. First, we will need to install MySQL, set up a database, and add a user to that database. To do that, we need to edit our manifest.pp and append the following:

class { '::mysql::server':
  root_password           => 'My4cc0unt$$password!',
  remove_default_accounts => true,
  package_name            => 'mariadb-server',
  package_ensure          => 'installed',
  service_name            => 'mariadb',
}

mysql::db { 'pwm':
  user     => 'pwm',
  password => 'pwm_passworD2!', # Can't do a password hash here :(
}

class { 'mysql::bindings':
  java_enable => true,
}

file { '/opt/tomcat8/pwm/lib/mysql-connector-java.jar':
  ensure  => link,
  target  => '/usr/share/java/mysql-connector-java.jar',
  require => Class['mysql::bindings'],
}

We will also need to install additional modules: Continue reading Securing PWM

Password management portal for end users

We in IT have heard it often: the #1 request coming into help desk ticket systems is password resets, account lockouts, and the like. PWM is a password reset web application written in Java for use with LDAP directories. You can configure it to work with Active Directory, OpenLDAP, FreeIPA, and others. There are already a handful of good tutorials on how to set up PWM (I think of this one in particular); however, I want to demonstrate the puppet apply command in this tutorial.

Prerequisites

This guide assumes you have an Active Directory server with TLS set up (to change passwords), which is beyond the scope of this post. It also assumes you have a CentOS 7 instance that can communicate with the Active Directory server, and that this is an environment without a puppet master/server. The end manifest can be uploaded to a master and used that way.

Continue reading Password management portal for end users

Avoiding Catastrophic Failure

You may have already heard the news about Delta Airlines’ catastrophic failure. Ars Technica reports the true cause of the failure – routine maintenance of the power generators. While it may be a little presumptuous, or high on the bragging scale, to have only one datacenter housing your entire infrastructure, it is not the best method. The blame is often placed on the IT personnel when computer systems go down, but in this case the error is shared. There was the maintenance individual who did not spot the potential for a fire, the building planning committee that placed the power sources too close together, the IT budgeting team that did not have an off-site solution, and the CTO misinformed about the infrastructure needs of a worldwide company. A catastrophic failure is anything that damages a company’s reputation.

I can understand the single point of failure – it is often found in SMB/non-profit environments. The chance of that single point of failure actually being hit seems marginal at best, which causes it to be overlooked time and again, as some hope they will never encounter that scenario. Budgetary constraints are often the first roadblock, the second being the time to implement, the third being the internal security practices around customer data, and the fourth being the assumption that the time to restore after a catastrophic failure is less than 24 hours – these also minimize the single point of failure in our minds. We so often minimize the single point of failure that it slips from the #1 concern to #100 on the “do someday” list.

We live in the best computer age right now. Catastrophic failures can be avoided. Here are a few ways to prevent them.

Continue reading Avoiding Catastrophic Failure