Category Archives: Linux

Adding a user to k8s RBAC

In order to add a user to a Kubernetes cluster, we will need several things: kubectl, the cluster's CA.crt and CA.key (found in the /etc/kubernetes/pki folder on your head node), and openssl.

First, create a private key for the new user. In this example, we will name the file employee.key:

openssl genrsa -out employee.key 2048

Next, we will need to create a certificate signing request – employee.csr – using the private key we just created (employee.key in this example). Make sure to specify your username and group in the -subj section (CN is the username and O is the group).

openssl req -new -key employee.key -out employee.csr -subj "/CN=username/O=developer"

Generate the final certificate employee.crt by signing the certificate signing request (employee.csr) you made earlier. In this example, the certificate will be valid for 90 days.

openssl x509 -req -in employee.csr -CA CA.crt -CAkey CA.key -CAcreateserial -out employee.crt -days 90

Give employee.crt, employee.key, and CA.crt to the new employee and have the employee follow the steps below.

# Set up the cluster
$ kubectl config set-cluster k8s.domain.tld --server https://api.k8s.domain.tld --certificate-authority /path/to/CA.crt --embed-certs=true

# Set up the credentials (a.k.a login information)
$ kubectl config set-credentials <name> --client-certificate=/path/to/cert.crt --client-key=/path/to/cert.key --embed-certs=true

# Bind the login to the server
$ kubectl config set-context k8s.domain.tld --cluster=k8s.domain.tld --user=<name>
# Optional: append `--namespace=<namespace>` to the command to set a default namespace.

Note: You may move the certificates to a safe location since the commands included --embed-certs=true, which saved the certs in base64 format inside the kubeconfig.
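Creating the certificate only authenticates the user; it does not authorize anything. As a rough sketch (not part of the original steps), a cluster admin could bind the group from the CSR to a built-in role; the binding name and the dev namespace below are just examples:

# Grant the "developer" group (the O= value from the CSR) edit rights in one namespace
$ kubectl create rolebinding developer-edit --clusterrole=edit --group=developer --namespace=dev

# The new user can then switch to the context created above and verify access
$ kubectl config use-context k8s.domain.tld
$ kubectl auth can-i list pods --namespace=dev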

Sometimes I post to my blog so I remember how to do a particular thing. This is one of those times.

Reusable containers with confd

I recently had the need to populate a file in a docker container based upon whether the container is running in production or development. I eventually came across confd, which let me populate data in files based upon particular environment variables. While confd excels with distributed key-value stores, my needs (and infrastructure) are at a much simpler level.

Confd requires a couple of folders: one for TOML configuration files (/etc/confd/conf.d/) and one for template files (/etc/confd/templates/). When confd runs, it looks at each TOML file in the conf.d directory and processes it according to its instructions.

In my repository example, I want the container to say hello to me when it detects a NAME environment variable and to print out the current datetime. If no environment variable is set, only the datetime is printed. To do this, I create the TOML file to look like this:

[template]
src = "echo.tmpl"
dest = "/echo"

This file instructs confd to generate the echo file, place it in the root directory (/), and render its contents from /etc/confd/templates/echo.tmpl.
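The template itself is not reproduced here; as a sketch, assuming confd's env backend and its getenv template function, echo.tmpl could look something like this:

#!/bin/sh
{{if getenv "NAME"}}echo "Hello {{getenv "NAME"}}"{{end}}
echo "The current time is: $(date)"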

When building the container, we must include these configuration files and ensure confd is run to generate the destination file. My example Dockerfile does just that by copying all of the files into the container and setting a docker-entrypoint script that basically runs confd and then executes the newly generated file.
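The entrypoint script lives in the repository rather than in this post; a minimal sketch of what it might look like, assuming confd 0.14 with its -onetime and -backend flags:

#!/bin/sh
# render /echo from the template using environment variables, then run it
/usr/local/bin/confd -onetime -backend env
exec sh /echo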

 andrew@wipplerxps > ~/git_repos/confd $  docker build -t blog-confd .
Sending build context to Docker daemon 57.34 kB
Step 1/9 : FROM centos:7.4.1708
 ---> 5076a7d1a386
Step 2/9 : LABEL maintainer "andrew.wippler@gmail.com"
 ---> Using cache
 ---> d712b31f7449
Step 3/9 : RUN mkdir -p /etc/confd/{conf.d,templates}
 ---> Running in f340bdcdf973
 ---> 1f0faa9b962f
Removing intermediate container f340bdcdf973
Step 4/9 : COPY docker/confd/ /etc/confd/
 ---> fb16dffc63ac
Removing intermediate container 133128cb7fc1
Step 5/9 : ADD https://github.com/kelseyhightower/confd/releases/download/v0.14.0/confd-0.14.0-linux-amd64 /usr/local/bin/confd
Downloading 17.61 MB/17.61 MB
 ---> a62b388274e6
Removing intermediate container 3f9ec343a5ab
Step 6/9 : RUN chmod +x /usr/local/bin/confd
 ---> Running in 1489dd02ea45
 ---> ab99a5fc5f95
Removing intermediate container 1489dd02ea45
Step 7/9 : COPY docker/docker-entrypoint.sh /var/local/
 ---> 16906971c8ef
Removing intermediate container 7a17a8e17e22
Step 8/9 : RUN chmod a+x /var/local/docker-entrypoint.sh
 ---> Running in 1562a6d06432
 ---> f963372159b1
Removing intermediate container 1562a6d06432
Step 9/9 : ENTRYPOINT /var/local/docker-entrypoint.sh
 ---> Running in 1b7e12c38b4c
 ---> f7d260597e0a
Removing intermediate container 1b7e12c38b4c
Successfully built f7d260597e0a
 andrew@wipplerxps > ~/git_repos/confd $  docker run blog-confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:24Z 0931113b25f4 /usr/local/bin/confd[7]: INFO Target config /echo has been updated
The current time is: Tue Nov 28 20:05:24 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $  docker run -e NAME="Andrew Wippler" blog-confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend set to env
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Starting confd
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Backend source(s) set to 
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo out of sync
2017-11-28T20:05:52Z 223f28e8d18f /usr/local/bin/confd[7]: INFO Target config /echo has been updated
Hello Andrew Wippler
The current time is: Tue Nov 28 20:05:34 UTC 2017
 andrew@wipplerxps > ~/git_repos/confd $

While it is fun to say hello to yourself once in a while, I am actually using confd to modify an nginx.conf. When I pass in the SSL environment variable, nginx will listen on port 443 with a self-signed cert and redirect all HTTP traffic to HTTPS. Obviously, in production I want to use a real SSL cert. Using confd allows me to have the same docker container in development and production – the only difference being a configuration change.
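As a rough sketch (not the actual template from my setup), the relevant piece of such an nginx template could branch on that variable like this; the certificate paths are placeholders:

{{if getenv "SSL"}}
server {
    listen 80;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    # ... site configuration ...
}
{{else}}
server {
    listen 80;
    # ... site configuration ...
}
{{end}}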

Autosign Puppet certificates on AWS

Let’s face it: Puppet’s certificate handling is a pain and a huge administrative burden if done manually. Thankfully, Puppet provides several methods of auto-signing certificates, one of which is crafting a special certificate signing request and verifying that the CSR is genuine.

On the puppet master

Apply the following code on your puppet master. This will set up the autosign script which will verify your custom certificate signing request. If the CSR is genuine, the puppet master will sign the certificate.

  service { 'puppetserver':
    ensure => running,
    enable => true,
  }

# The file must have execute permissions
# The master will trigger this as `/etc/puppetlabs/puppet/autosign.sh FQDN`
  file { '/etc/puppetlabs/puppet/autosign.sh':
    ensure  => file,
    mode    => '0750',
    owner   => 'puppet',
    group   => 'puppet',
    content => '#!/bin/bash
HOST=$1
openssl req -noout -text -in "/etc/puppetlabs/puppet/ssl/ca/requests/$HOST.pem" | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa',
  }

# This sets up the required ini setting and restarts the puppet master service
  ini_setting {'autosign nodes':
    ensure  => present,
    path    => '/etc/puppetlabs/puppet/puppet.conf',
    section => 'master',
    setting => 'autosign',
    value   => '/etc/puppetlabs/puppet/autosign.sh',
    notify  => Service['puppetserver'],
    require => File['/etc/puppetlabs/puppet/autosign.sh']
  }

On the agents

With our puppet master ready to go, we need to set up our agents to generate the custom certificate request. This can be done by writing the following content to /etc/puppetlabs/puppet/csr_attributes.yaml before running puppet. (Note that the $(curl ...) lookups are shell command substitutions; they are expanded when the file is written by a shell, as in the userdata script below, not by puppet itself.)

custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)

Note: The 1.2.840.113549.1.9.7 value must match the string the autosign script greps for. This OID is the PKCS#9 challengePassword attribute, which is reserved for purposes such as this (a pre-shared secret embedded in the CSR).
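If a node is not being signed automatically, you can run the same check the autosign script performs by hand on the master; a sketch, assuming the pending request is still sitting in the requests directory (agent.example.com is a placeholder):

openssl req -noout -text -in /etc/puppetlabs/puppet/ssl/ca/requests/agent.example.com.pem | grep pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa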

Execution

With everything in place, the way to execute this successfully is to pass the script below as the userdata when creating an EC2 instance:

#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
    1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
    pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    pp_image_name:  $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML
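The userdata above only writes the CSR attributes; the agent still has to run to generate and submit the CSR. A sketch of what could follow in the same script, assuming the puppet-agent package is already baked into the image and puppet.example.com is your master:

# assumption: puppet-agent is preinstalled and puppet.example.com is the master
/opt/puppetlabs/bin/puppet agent -t --server puppet.example.com --waitforcert 5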

An alternative method is to create a custom AMI (especially useful for auto-scaling groups). I use the puppet code below to create my golden AMI.

  cron { 'run aws_cert at reboot':
    command => '/aws_cert.sh',
    user    => 'root',
    special => 'reboot',
    require => File['/aws_cert.sh'],
  }

  file { '/aws_cert.sh':
    ensure  => file,
    mode    => '0755',
    content => '#!/bin/sh
if [ ! -d /etc/puppetlabs/puppet ]; then
   mkdir /etc/puppetlabs/puppet
fi
cat > /etc/puppetlabs/puppet/csr_attributes.yaml << YAML
custom_attributes:
  1.2.840.113549.1.9.7: pi0jzq9qmabtnTa8KfkBs2z5rQZ3vZsa
extension_requests:
  pp_instance_id: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  pp_image_name: $(curl -s http://169.254.169.254/latest/meta-data/ami-id)
YAML

export CERTNAME="aws-node_name-`date +%s`"

/opt/puppetlabs/bin/puppet apply -e "ini_setting { \"certname\": \
  ensure  => present, \
  path    => \"/etc/puppetlabs/puppet/puppet.conf\", \
  section => \"main\", \
  setting => \"certname\", \
  value   => \"$CERTNAME\", \
}"

/opt/puppetlabs/bin/puppet agent -t -w 5',
  }

Moving to Desktop GNU/Linux from Windows/Mac

There are many curious individuals who tinker with GNU/Linux as a Server OS and want to experience what it is like as a Desktop OS. The switch is often hindered by two obstacles:

  1. Some daily use programs are not available. (i.e. Photoshop, iTunes, etc.)
  2. Not knowing what to do if something goes wrong, or how to get a 3D graphics driver installed and working.

While these are valid concerns and definite showstoppers for some, others can safely migrate to GNU/Linux.

The obstacle of programs

I like Krita as an alternative to Photoshop. The menu options are nearly the same and I do not have to install a silly theme (like I have to do in Gimp) or re-learn photo editing just to figure out where everything is. I have successfully installed Photoshop CS4 with wine without any issues, but Krita is more fully featured than CS4. Darktable is also a good alternative to Photoshop's Camera Raw/Bridge.

Rhythmbox connects to iPhones/iPods the same way as iTunes does, but without the store. iTunes does run on a recent version of wine quite well. Some might also want to check out Clementine.

Almost every program has an alternative. Alternatives can be found via alternativeto.net or the Software Recommendations site on StackExchange.

The unknown obstacles

To use GNU/Linux successfully as the primary Desktop OS, in my opinion, one must have a desktop with worthy hardware. I consider myself an AMD guy. I like the price-for-performance ratio, and I rarely do CPU-intensive tasks on my desktop. When AMD bought ATI, I was also happy, as ATI was my favorite graphics card maker. Unfortunately, most Desktop GNU/Linux users are developers who need that extra performance; they have desktop workstations with Nvidia graphics cards and Intel CPUs. You will often find that Desktop GNU/Linux performs better, is easier to use, and has far more tutorials when it comes to Nvidia graphics cards and getting them working.

Captive Portal Overview

I originally authored this on Aug 16, 2016 at http://unix.stackexchange.com. Considering my tutorial did not include an overview, I thought I would re-post it on my blog.


To make a captive portal appear, you need to stop all internet traffic and provide a 302 redirect to the client’s browser. To do this, you need to have a firewall (like iptables) redirect all traffic to a webserver (like nginx, Apache, etc.) where the webserver responds with a 302 redirect to the URL of your login page.

I have written a lengthy article on how to do this with a Raspberry Pi. It basically boils down to iptables blocking/redirecting traffic to the webserver:

iptables -t nat -A wlan0_Unknown -p tcp --dport 80 -j DNAT --to-destination 192.168.24.1
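The wlan0_Unknown chain in that rule is a custom chain from the Raspberry Pi article; as a sketch, it needs to exist and be hooked into PREROUTING for traffic arriving on the wireless interface, roughly like this:

# create the custom chain and send all traffic arriving on wlan0 through it
iptables -t nat -N wlan0_Unknown
iptables -t nat -A PREROUTING -i wlan0 -j wlan0_Unknown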

and then the webserver (nginx) redirecting to the login page:

# For iOS
if ($http_user_agent ~* (CaptiveNetworkSupport) ) {
    return 302 http://hotspot.localnet/hotspot.html;
}

# For others
location / {
    return 302 http://hotspot.localnet/;
}

iOS has to be difficult in that it needs the WISPr settings. The hotspot.html contents are as follows:

<!--
<?xml version="1.0" encoding="UTF-8"?>
<WISPAccessGatewayParam xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.wballiance.net/wispr_2_0.xsd">
<Redirect>
<MessageType>100</MessageType>
<ResponseCode>0</ResponseCode>
<VersionHigh>2.0</VersionHigh>
<VersionLow>1.0</VersionLow>
<AccessProcedure>1.0</AccessProcedure>
<AccessLocation>Andrew Wippler is awesome</AccessLocation>
<LocationName>MyOpenAP</LocationName>
<LoginURL>http://hotspot.localnet/</LoginURL>
</Redirect>
</WISPAccessGatewayParam>
-->

Captive Portal Restaurant Menu

I have been contacted several times in regard to my captive portal post. In India there seems to be a surge in popularity for restaurants to have an open WiFi that prompts a user to open up a menu/splash page. The caveat is the legal issues encountered when providing free, open wireless internet. In order to avoid legal issues, the device that broadcasts must be disconnected from the internet, although I am curious whether just blocking internet access for those connecting is enough.

It seems like an interesting issue to tackle, but I think creating something out of a WiFi captive portal would be like hammering a square peg through a round hole. It might work (if given enough time and effort), but in the end, it is probably not the right tool for the job.

While writing this post, I was reminded how Google appears to be tackling this problem. On my Android phone, I let the NSA spy on my whereabouts by enabling location services. It also lets my wife pinpoint where I am physically located. With it enabled, I can visit most shops in my area and get a Google maps prompt with the business information, reviews, and a few pictures. (Side note: if you appear in court over a childish/dumb action, you validate the judge’s decision when you post a negative review of the court house. Also please do not butcher the English language when trying to review places.)

Utilizing GPS location as well as having an app that provides the information seems like the best route to go in this circumstance. An alternative would be to have an app with WiFi credentials hardcoded in, listen for when a WiFi connection is made, check to see if it matches a predefined SSID, and attempt to communicate with a local app server to process data. Of course doing something like that is outside the scope of my tutorials.

Easy unix epoch timestamps from CLI

While working on various projects, and ultimately needing a Unix timestamp for expiring swift objects in OpenStack, I wanted a quick way to convert past, present, and future dates to the Unix epoch. Traditionally, I went to Google, searched for a Unix timestamp converter, and retrieved my seconds that way. Unfortunately, in exams you are not allowed to visit external websites.

If you know how to read documentation, you will already know that the date command has this feature built in. An excerpt from the docs follows:

 ...
       Show the local time for 9AM next Friday on the west coast of the US

              $ date --date='TZ="America/Los_Angeles" 09:00 next Fri'

DATE STRING
       The  --date=STRING  is  a mostly free format human readable date string
       such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29  16:21:42"  or
       even  "next Thursday".  A date string may contain items indicating cal‐
       endar date, time of day, time zone, day of week, relative  time,  rela‐
       tive date, and numbers.  An empty string indicates the beginning of the
       day.  The date string format is more complex than is easily  documented
       here but is fully described in the info documentation.
...

Further reading of the docs will show you how to format the output as epoch seconds with date +%s. So when the time comes to expire an object from swift at 17:00 next Friday, you can do something like:

swift post container file -H "X-Delete-At: $(date +%s --date='17:00 next Friday')"
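If you want to sanity-check the value before handing it to swift, run the substitution on its own first:

date +%s --date="17:00 next Friday"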

OpenStack PS1 snippet

I have been studying for my OpenStack certification test (the COA), which is scheduled for next week. One thing that was painful to keep track of was which user I was using to interface with OpenStack, as the rc file you download from OpenStack does not update your PS1 prompt. I came up with the following solution and placed it in my ~/.bashrc:


function parse_os_user() {
    if [ ! "${OS_USERNAME}" == "" ]
    then
        echo "(${OS_USERNAME})"
    else
        echo ""
    fi
}

PS1='\u@\h \w `parse_os_user` \$ '
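After sourcing an rc file that exports OS_USERNAME, the active user shows up in the prompt; a rough illustration (the rc file name and host are hypothetical):

$ source admin-openrc.sh
andrew@workstation ~ (admin) $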

OpenStack certification

On Dec 20th, I am scheduled to take my COA exam. From the exam requirements page, it appears to be a moderately difficult exam. The few areas I need to work on are heat templates and swift object administration. The few things I know about the exam are what is publicly available via YouTube videos of OpenStack summit sessions.

One of my troubles when studying for exams is creating content to test myself on the exam objectives. I look at the requirements and say to myself, “I know that,” and nothing gets written for that aspect. One thing I have done in the past is to search GitHub for exam prep questions. One I have found for OpenStack is AJNOURI/COA. He also made a nifty website for his test prep questions.

A key aspect that has helped me pass all of my open book exams is recalling where to find my troubled areas in the documentation. Looking at the docs/reading the manual has become a best practice of mine. Most of the time, exam questions are covered in the docs, as the exams expect you to have read them.

Using Puppet to host a private RPM repository

A repository is a place where files are stored, indexed, and made available through a package manager to anyone who has the repository information. With RPM-based systems, a repository is created with a tool called createrepo. Most of the time, publicly available repositories already offer the packages your server needs. When you have a custom application you want to deploy (or even rebuild an existing application with your patches), it is best to distribute that package with a repository rather than a file share or some other means. Often a folder structure is created so that differing client OS versions can connect to the same repository and access packages compiled for that specific release. In my example below, I am not creating this folder structure as I am only serving one major release – CentOS 7 – and the packages I am generating are website directories, which are just collections of portable code.

A private repository is not a tricky feat – all you have to do is serve the repository via HTTPS and require HTTP basic authentication. You then configure the clients to connect to the repository with the basic authentication in the URL string (e.g. baseurl=https://user:pass@repo.example.com/). The HTTPS protocol is not required to serve a repository, but it does prevent network snoopers from seeing your repository credentials.

Now that we know what is needed for a private repository, we can then define it in our puppet code.

node 'repo.example.com' {

  file { '/var/yumrepos':
    ensure => directory,
  }

  createrepo { 'yumrepo':
    repository_dir => '/var/yumrepos/yumrepo',
    repo_cache_dir => '/var/cache/yumrepos/yumrepo',
    enable_cron    => false, # optional cron job to regenerate the repo metadata every 10 minutes
  }

  package { 'httpd':
    ensure => installed,
  }

  httpauth { 'repouser':
    ensure    => present,
    file      => '/usr/local/nagios/etc/htpasswd.users',
    password  => 'some-long-password',
    mechanism => basic,
    require   => Package['httpd'],
  }

  file { '/usr/local/nagios/etc/htpasswd.users':
    ensure => file,
    owner  => 'nginx',
    mode   => '0644',
  }

  class{'nginx':
    manage_repo    => true,
    package_source => 'nginx-mainline',
  }

  nginx::resource::vhost{"$::fqdn":
    www_root             => '/var/yumrepos/yumrepo',
    index_files          => [],
    autoindex            => 'on',
    rewrite_to_https     => true,
    ssl                  => true,
    auth_basic           => 'true',
    auth_basic_user_file => '/usr/local/nagios/etc/htpasswd.users',
    ssl_cert             => "/etc/puppetlabs/puppet/ssl/public_keys/$::fqdn.pem",
    ssl_key              => "/etc/puppetlabs/puppet/ssl/private_keys/$::fqdn.pem",
    vhost_cfg_prepend    => {
      'default_type'     => 'text/html',
    }
  }

}

For the above code to work, we need the required modules:

mod 'palli/createrepo', '1.1.0'
mod "puppet/nginx", "0.4.0"
mod "jamtur01/httpauth", "0.0.3"

We can then use the following declaration on our nodes to use this repository.

yumrepo {'private-repo':
  descr           => 'My Private Repo - x86_64',
  baseurl         => 'https://repouser:some-long-password@repo.example.com/',
  enabled         => 'true',
  gpgcheck        => 'false',
  metadata_expire => '1',
}

You now have a fully functional private repository – deploy your awesome software.
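To publish a package into it, copy the RPM into the repository directory and regenerate the metadata (a sketch; the package name is hypothetical and the paths match the createrepo resource above, which has the optional cron disabled):

cp my-app-1.0-1.el7.x86_64.rpm /var/yumrepos/yumrepo/
createrepo --update --cachedir /var/cache/yumrepos/yumrepo /var/yumrepos/yumrepo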

Repercussions from a 1.1 Tbps DDoS

In case you missed it, the largest recorded Distributed Denial of Service (DDoS) attack recently occurred. While under DDoS, a victim’s server (or servers) is under high load and cannot complete all of the requests made to it. Basically, a DDoS victim is someone the attacker wants silenced on the internet. In order to send a DDoS of that magnitude, the attacker has to have control over many computers – a botnet. It is believed that this attack originated from over 150,000 devices in the IoT category (smart TVs, refrigerators, thermostats, etc.). Due to their poor default security, IoT devices are easy targets for hackers who intend to add them to their botnets. A recent article on Ars Technica points out the current issues with IoT and Linux kernel security but, like most articles of this nature, provides no clear-cut solution to the problem we are experiencing. Below are my thoughts on this situation and how it may be resolved.

We need a governing body to issue a seal of approval for IoT devices and anything compiled with the Linux kernel. Then we, as consumers, must use, buy, and encourage others to buy from the companies that have this seal. The governing body should ensure each company seeking the seal complies with the following criteria:

  1. Every new device created and sent to market has a minimum of 5 years' worth of bi-monthly security patches and updates from the day of release to the public.
  2. In the event the company goes bankrupt, dissolves, or cannot support any older product it has released in the past 5 years, the company must provide schematics, instructions, or software so that open source enthusiasts can recreate, patch, or upgrade the legacy product.
  3. No known vulnerability must be willingly left unpatched.
  4. When a CVE is identified in a company's product, a test case must be created and run on that code base for every future release.
  5. A notification service must be in place for when new updates are released and must be available in RSS or email form.
  6. Automatic updates should occur over HTTPS.
  7. Backdoors, admin terminals, etc. should require a physical connection to the device in order to grant access.

For a potential company to get this approval, it may seem like an arduous task to get all the controls in place; however, by applying DevOps methodologies, these tasks can be a simple feat. This would require the governing body to not only enforce the list, but also have training available on how to comply with it. For this reason, I suggest that the Linux Foundation become this governing body and issue the seals of approval.

First puppet module published

I completed my first public module for puppet and submitted it to the puppet forge. It seems too simple to compile into a build and submit it to the forge; however, I made it public for these reasons:

  1. I needed experience with puppet code testing. This helped me at the most basic level.
  2. I felt like someone else could benefit from the code – even if it is one person.
  3. I wanted to do it.

Still, the code seems too juvenile to be submitted to the forge. All it does is take the hostname of a Digital Ocean droplet and submit its IP address as a new DNS record inside of Digital Ocean DNS. The code is located here.

I almost want to follow up with this and develop my duplicity module into reusable code for the community.

Provisioning VMs with cloud init

One of the easiest ways to deploy a virtual machine in oVirt is to first install the OS and then turn the VM into a template. This will allow you to copy that template to deploy new instances. One mundane task after a new template is copied to a new instance is logging in, changing the IP, setting the hostname, setting up Puppet, running puppet, etc. cloud-init is the tool designed to fix that mundane process by allowing those steps to be automated. oVirt/RHEV (as well as OpenStack, AWS, and others) allow you to pass in user data, which is supplied to cloud-init after the template is copied over and turned on. This allows for scripting on the new VM – easing deployment.

For my environment, I wanted a CentOS 7 template. To have that, I must first install CentOS on a new VM and seal it (Windows calls this Sysprep). Before I seal it, I must install cloud-init and any other tools I might use for deployment – such as puppet. Here are the steps to obtain just that:

Continue reading Provisioning VMs with cloud init

Securing PWM

In last week's post we set up PWM insecurely. In this post, we are going to lock it down and install mysql to store the reset questions. This guide assumes you have this CentOS 7 server publicly accessible with ports 80 and 443 available to the entire world. First, we will need to install mysql, set up a database, and add a user to that database. To do that, we need to edit our manifest.pp and append the following:

class { '::mysql::server':
  root_password           => 'My4cc0unt$$password!',
  remove_default_accounts => true,
  package_name            => 'mariadb-server',
  package_ensure          => 'installed',
  service_name            => 'mariadb',
}

mysql::db { 'pwm':
  user     => 'pwm',
  password => 'pwm_passworD2!', # Can't do a password hash here :(
}

class { 'mysql::bindings':
  java_enable => true,
}

file { '/opt/tomcat8/pwm/lib/mysql-connector-java.jar':
  ensure  => link,
  target  => '/usr/share/java/mysql-connector-java.jar',
  require => Class['mysql::bindings'],
}

We will also need to install additional modules: Continue reading Securing PWM

Linux training on sale until 7/31/16

The Linux Foundation is offering select courses at a discount until 7/31/16. Some offers are up to 55% off. You can also get an additional 10% off at checkout by using the code GSHOP. That brings the prices down to:

$180 – For Essentials of System Administration (LFS201) or Linux Networking and Administration (LFS211)

$315 – Essentials of System Administration AND Linux Networking and Administration

$269.10 – Certified Linux SysAdmin or Certified Linux Engineer

$495 – Certified Linux Rockstar

You can find these prices by using their special system administrator appreciation sale page and using the checkout code GSHOP. If you were looking for a time to level up your Linux knowledge, now is the time to do it.