Apr 25, 2016
 
PiVPN Logo

So I really applaud the efforts and progress by the EFF on the Let’s Encrypt initiative.  In this post-Snowden era I believe it is very important for users to take their privacy and security into their own hands whenever possible.  Let’s Encrypt allows anyone running a website to easily serve that site over an encrypted channel.  Even if you aren’t a technical person you should be able to get a free cert from Let’s Encrypt using one of the integrations they provide.  This is a great boon for those who have their own sites and blogs, but what about the people at home who don’t run their own site?  They use the internet and rely on the various sites they visit to determine whether they are secure.  A frequent piece of the solution here is to leverage a VPN (Virtual Private Network).  It will encrypt and tunnel your traffic from your client side through to the VPN server side.  Yes, from the VPN server out to the internet your traffic will again be unencrypted if the site doesn’t offer HTTPS connections, but from your local location to the VPN server you have great security.  This is important because people frequently use the internet in locations whose security they can’t and don’t control.  At any wifi hotspot, public place, friend’s house, etc., you have no clue what could be on that network intercepting your data, or worse.  If you set up a VPN server at home, where you trust your local network, then no matter where you are, you can VPN into your home network and it is as if you were using the internet from your house.  In other words, if you are sitting in a Starbucks, you can VPN into your home VPN server and now your traffic is completely encrypted from the unknown and unsecured Starbucks wifi direct to your home, where it then goes out to the site you visited.  Sadly, for most, configuring and managing your own VPN server is a task not easily accomplished.

This loops us back to a Let’s Encrypt parallel.  Where Let’s Encrypt took a task that was challenging for many and made it greatly more accessible, PiVPN does the same for installing and managing an OpenVPN server.  What is this PiVPN?  If you’ve ever searched for how to install OpenVPN you may have found it is non-trivial.  PiVPN makes installing OpenVPN easy, quick and fun.  If you are technical enough to get a Jessie Lite image up and running on a Raspberry Pi, you are now technical enough to run your own VPN server thanks to PiVPN.  Once you have successfully logged into your Raspberry Pi, the install process for a fully working and manageable OpenVPN server is a one-line command:

curl -L https://install.pivpn.io | bash

Yes, that is it.  You can literally hit ‘Enter’ through the install, but if you are more technical the installer will let you choose many different customization options along the way.  Once it is installed you can manage the configurations (OVPN files) you install on your clients with simple commands on your server:

‘pivpn add’ – This will add clients and takes one optional parameter:

‘pivpn add nopass’ – This will add a client certificate without a password.  Only recommended if you really need it.

‘pivpn list’ – This will list the clients

‘pivpn revoke’ – This will remove clients
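
A typical session then looks something like this (the prompts are paraphrased from memory, and the generated .ovpn file typically lands in an ovpns folder in your home directory):

pivpn add            # prompts for a client name and a pass-phrase for its cert
pivpn list           # confirm the new client appears
pivpn revoke         # prompts for the client you want to remove

Copy the resulting .ovpn file to the OpenVPN client on your phone or laptop and you are done.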

All the code for this installer is available on GitHub, where questions and contributions are welcome!

As a final note, before you run off and play with PiVPN on your own: I understand that some people may want to encrypt traffic leaving their home.  It is one thing to be in a public, untrusted place and encrypt the traffic to your home, where it then goes out normally to the internet.  But what if you don’t trust your own ISP?  Now you want to encrypt the traffic even leaving your home, perhaps to a VPN endpoint out on the internet.  Time to pay for a service?  NO.  I’ve made sure PiVPN will also work if you boot up a free-tier Amazon server running the Ubuntu 14.04 server image.  So simply create an account on Amazon’s AWS infrastructure, boot up a free-tier Ubuntu server and run the PiVPN install command.  Now you have your own VPN server out on the internet just like a paid service.

Feb 01, 2016
 

Those who care about their communications (individuals or enterprises) may at one point decide to look into encrypting email. Email is unfortunately a product of its past: designed for sending communications from one mailbox and delivering to another across the internet, at a time when encrypting that communication wasn’t even an afterthought. There have been some bolt-on patches to secure email, but really a nice new protocol is needed. Being stuck with what we have, you may have decided for one reason or another that S/MIME certificates are what you’d like to use to secure your email. A lot of people are certainly concerned with the privacy of their email if you look at the section, “How important is it that your online information remain private?” in this article. I recently needed to ensure such certificates were also FIPS compliant. I had a hard time using the normal openssl binaries while ensuring I was using FIPS-compliant commands to generate the certificates. So first we will compile an openssl binary in FIPS mode. This binary will error as soon as we run a command that is not FIPS compliant, ensuring our resultant certs are good. Then I’ll show how to generate the certs, either self-signed by your own CA (if you will use and trust them among friends/family) or signed by an Enterprise CA (if you are a company with a trusted Enterprise CA and clients that trust it). Regardless, at the end you’ll have S/MIME certs you can use in your mail clients for secure communications.

Building OpenSSL with FIPS Mode

I’m using Ubuntu 14.04.03 LTS; instructions may vary slightly on a different target system.
First download and extract the openssl source tarballs we will need (the below are the latest at time of this writing, but always grab the latest stable releases):

wget http://www.openssl.org/source/openssl-1.0.2e.tar.gz
wget http://www.openssl.org/source/openssl-fips-2.0.11.tar.gz
tar xvzf openssl-fips-2.0.11.tar.gz
tar xvzf openssl-1.0.2e.tar.gz

You’ll probably need the build-essential package, which I had already installed, so go ahead and `aptitude install build-essential`.
Next let’s build the FIPS module our openssl will need:

cd openssl-fips-2.0.11/
./config
make
make install

Near the bottom of that output you should see something like installing to /usr/local/ssl/fips-2.0; we will need to reference that directory in a bit.
Now let’s compile our own openssl. cd into the openssl-1.0.2e/ dir you extracted above:

./config fips shared
make depend
make
make install

What we did here was tell our compiled openssl that we have a shared FIPS module to use.  The output of the above should tell you “OpenSSL shared libraries have been installed in: /usr/local/ssl”.

So your normal system openssl is completely intact, but now in /usr/local/ssl you have the one compiled with FIPS support.
You can check the versions:
openssl version will come from your system and output something like `OpenSSL 1.0.1f 6 Jan 2014`,
whereas if you cd /usr/local/ssl/bin and run ./openssl version you’ll see our FIPS one: `OpenSSL 1.0.2e-fips 3 Dec 2015`.
Great, now let’s export a couple of variables so that our compiled openssl can get to the shared FIPS module:
export LD_LIBRARY_PATH=/usr/local/ssl/fips-2.0 && export OPENSSL_FIPS=1
One final test to prove this openssl will error on anything that is not FIPS compliant: we can try to get an MD5 hash of a file.
./openssl md5 /home/user/somefile
and you’ll get some error output like:
Error setting digest md5
140006545020576:error:060A80A3:digital envelope routines:FIPS_DIGESTINIT:disabled for fips:fips_md.c:180:

since MD5 is not a FIPS-approved digest.

Creating the FIPS S/MIME Certs

Now that we have an openssl that will only allow us to run things that are FIPS compliant, we can generate some S/MIME certs.
I’m going to number the steps to take here to create your certs, with a comment after each numbered step describing what the step is doing. Where you see multiples of the same number, choose the step you want based on your desired outcome (what options you want, what CA will be used, etc.).
1. ./openssl genrsa -out newkey.key 4096 – where newkey.key is the key and can be named anything you want; we are just generating a 4096-bit key.
2. ./openssl pkcs8 -v1 PBE-SHA1-3DES -topk8 -in newkey.key -out enc_newkey.key – this takes our normal key above and encodes it in pkcs8 format. This is a commonly used format, but you have options: my next command uses v2, which isn’t as widely accepted, and there is also pkcs12. If you think you need some variant of a command I specify here, you can get more information by running ./openssl pkcs12 /? and it’ll output your options.
2. ./openssl pkcs8 -v2 des3 -topk8 -in newkey.key -out enc_newkey.key – here is another variant of the above command if you wanted to use version 2 of pkcs8; again, choose one of these commands to run.
*With any of the step 2 commands you will be asked for a password; please enter something from 4 to 1023 characters long, and then provide it when asked in step 3 below.
3. ./openssl req -new -key enc_newkey.key -out new_request.csr – Now we take our new pkcs8 encoded key and generate a CSR (Certificate Signing Request) with it.
(see CSR Creation Info below for examples of fields)

Depending on how you want to have your certificate signed, use ONE of the Step 4’s below:
4. ./openssl x509 -req -days 3650 -in new_request.csr -signkey enc_newkey.key -out email.crt – This is the self-signed option
4. ./openssl x509 -req -days 3650 -in new_request.csr -CA enterprise_ca.cer -CAkey enterpriseprivatekeynopass.pem -set_serial 13 -out email.crt – This shows using a CA you have the cert and key for (the CA’s key does the signing here, so -signkey is not used).

4. Using a Microsoft Enterprise CA Web UI:
  • Click on the Request a Certificate link
  • Click on the advanced certificate request link
  • Paste the contents of your CSR into the top box (do NOT include the BEGIN and END lines!)
  • Click Submit
  • Download the Base 64 encoded certificate and chain; name them yournameB64.cer and yournameB64.p7b

If you used the Web UI then this is your Step 5:
5. ./openssl pkcs12 -export -out yourname.pfx -inkey enc_newkey.key -in yournameB64.cer -certfile enterprise_ca.cer – note the -inkey is the pkcs8-encoded private key we generated in step 2.
otherwise
5. ./openssl pkcs12 -export -descert -in email.crt -inkey enc_newkey.key -out email.pfx
This exports your new cert and key into a pfx file, the format generally used to import into mail clients to support S/MIME.
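
If you want to sanity-check the resulting bundle before importing it into a mail client, you can have openssl dump it back out (you’ll be prompted for the export password you just set):

./openssl pkcs12 -info -in email.pfx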

CSR Creation Info
Example answers:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:South Dakota
Locality Name (eg, city) []:Sioux Falls
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Company Inc
Organizational Unit Name (eg, section) []:Development
Common Name (e.g. server FQDN or YOUR name) []:John Doe
Email Address []:john.doe@company.com

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Jan 11, 2016
 

Let’s Encrypt is a non-profit organization that is lowering the bar for getting people to encrypt their websites. They are lowering the bar in two ways: first by making it easy and API-driven to obtain and renew the certificates, and second by making it entirely free. Note, these are Domain Validation certs, not Extended Validation or Organization Validation certs; those you still should buy from a reputable company like DigiCert.
Understanding the importance of encrypting the web is Step 1 here. Go google search to convince yourself of that and then come back to get some free certs on your websites.
The method of using Let’s Encrypt described below will probably get much easier for non-technical people as their software matures. However, I’m completely happy with it as it will grab the certs only, not mess with my nginx/apache configs, work with multiple vhosts, and renew every 60 days. The first thing someone may balk at when looking into the Let’s Encrypt certs is that they give out certs that expire in 90 days and recommend renewing every 60. This isn’t what most people are used to, but once you have it configured you will do nothing to make this happen, and they briefly review the decision to use ninety-day lifetimes on their certificates as well.

Installing Let’s Encrypt & Obtaining a Certificate

First you will want to git clone their repository, so ensure you have git installed and clone it:

apt-get install git
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt/

Later, to ensure you are using the latest version, just run a git pull within the letsencrypt directory to pull down their latest changes.

At this point you need to simply run ./letsencrypt-auto with the parameters you want to get your cert.  It’s very helpful to know that ./letsencrypt-auto --help all will give you all the options and parameters you can use.
In my case I have a site like vigilcode.com that has a separate vhost for blog.vigilcode.com and forum.vigilcode.com and is also available at www.vigilcode.com. You can specify all of these in one command.

./letsencrypt-auto certonly --webroot -w /var/www/vigilcode.com --email youremail@yourdomain.com -d vigilcode.com -d www.vigilcode.com -d blog.vigilcode.com -d forum.vigilcode.com

This will put the certs in directories under /etc/letsencrypt/live/vigilcode.com since vigilcode.com was the first domain parameter. If you have a completely different domain as a vhost as well then simply run another command for that site like:

./letsencrypt-auto certonly --webroot -w /var/www/anothersite.com --email youremail@yourdomain.com -d www.anothersite.com -d anothersite.com

and letsencrypt will put those certs into /etc/letsencrypt/live/www.anothersite.com/
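
Each live/ directory contains the same standard set of files, and the two your web server config will reference are fullchain.pem and privkey.pem:

ls /etc/letsencrypt/live/www.anothersite.com/
cert.pem  chain.pem  fullchain.pem  privkey.pem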

An example of a simple nginx config for “anothersite.com” that redirects any normal http traffic to https and points to these letsencrypt certs is:

server {
    listen 80;
    listen [::]:80;
    server_name www.anothersite.com anothersite.com;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl;
    listen [::]:443 ssl;
    #
    gzip off;
    #
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_timeout 5m;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:!aNULL:!eNULL:!MD5:!EXP:!PSK:!SRP:!DSS;
    ssl_certificate       /etc/letsencrypt/live/www.anothersite.com/fullchain.pem;
    ssl_certificate_key   /etc/letsencrypt/live/www.anothersite.com/privkey.pem;

    root /var/www/anothersite.com;

    index index.html;

    server_name www.anothersite.com anothersite.com;

    if ($request_method !~ ^(GET|HEAD|POST)$ )
    {
            return 405;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

This config should give you an easy A in the Qualys SSL Labs Server Test, which I strongly recommend you run against your site to ensure you always have an A. I assume if I turned on HSTS I’d get an A+ out of it, but I didn’t want to enable that yet.

With the above I did encounter Insecure Platform Warnings in the output. To solve this I switched to pyOpenSSL and then modified the client to use it as well.

apt-get install python-pip
pip install pyopenssl ndg-httpsclient pyasn1

and then added 2 lines to client.py per this diff:

git diff acme/acme/client.py
diff --git a/acme/acme/client.py b/acme/acme/client.py
index 08d4767..9481a50 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -11,6 +11,7 @@ import OpenSSL
 import requests
 import sys
 import werkzeug
+import urllib3.contrib.pyopenssl

 from acme import errors
 from acme import jose
@@ -19,6 +20,7 @@ from acme import messages


 logger = logging.getLogger(__name__)
+urllib3.contrib.pyopenssl.inject_into_urllib3()

Configuring Automatic Renewals

I found a nice script in the letsencrypt forums that accomplished most of what I wanted. I made a few edits, so I also uploaded my modified version of it here in case the forum copy disappears.

If you look at the script, the basic renewal command is ${LEBIN} certonly --renew-by-default --config "${LECFG}" --domains "${DOMAINS}"
The only pieces missing from our original command are the webroot authentication method and the webroot path. Since I have a different webroot for some of the virtual hosts, I want to put each in its own .ini file. So instead of specifying a static .ini file in his script, I changed it to a variable matching the cert name:

Line 125: LECFG="/etc/letsencrypt/${CERT_NAME}.ini"

Now I’d create /etc/letsencrypt/www.anotherdomain.com.ini and /etc/letsencrypt/vigilcode.com.ini,
give each an email = line, and then also specify the webroot authenticator and webroot path, as in:

email = youremail@yourdomain.com
authenticator = webroot
webroot-path = /var/www/vigilcode.com

Then you can just pop these into a crontab with crontab -e:

10 2 * * 6 /root/scripts/auto_le_renew.sh vigilcode.com
15 4 * * 6 /root/scripts/auto_le_renew.sh www.anotherdomain.com

These will run every Saturday morning, so when I wake up I’ll have the success or failure message if a renewal was triggered.

Dec 28, 2015
 
Notes Tidbits

This post will hold little notes and tidbits that I want to ensure I don’t lose yet don’t require their own dedicated blog post to discuss.

Removing and cleaning up old linux kernels

I run a lot of ubuntu-based linux servers and, depending on how they were partitioned out, over time the unused older linux kernels can chew up a ton of space. I used to just remove the old images and headers from the /boot partition and run update-grub2 to update the grub menu with what is left. That does free up space on the actual /boot partition, but those unused kernels and images are still installed on your system. I’ve googled for the best way to remove these and found one command I’ve been using for a while now that works flawlessly and causes no issues, as long as you follow one rule. The one rule: before executing this command it is important to update your system to the latest kernel (aptitude update && aptitude full-upgrade), and then REBOOT to actually boot into it. Once safely on your latest kernel, run this command to clean up every older version:
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
It looks long and scary, but it really isn’t that bad once you break it down, which someone thankfully did in this post. That’s it: follow the one rule and run the one command and your server’s kernels will be nice and lean.

Nov 30, 2011
 
Mastering SSH Keys

Welcome to the wonderful world of SSH keys! If you don’t yet share my enthusiasm you soon will! SSH keys are a perfect way for you to control access to your machine, whether that be a very secure way for only you to have access, locking down other authorized users and preventing their passwords from getting distributed or stolen, or even allowing access to scripts for very specific purposes. SSH keys accomplish all of it and more. First, let me just lay out how simple SSH keys are, in case you aren’t clear and get discouraged by the length of this post. The basics: you have your machine create a private and public key pair. You can optionally protect the private key with a password. You copy the public key to servers you wish to log on to, while keeping the private key secured on your own system. Then when you log in, the server uses the public key to verify that you hold the corresponding private key (unlocked with that optional password) and you get in. It is that simple at its core; there are just a lot of various configuration options you can use to accomplish certain features or functionality that may be more desirable for you. The keys are very secure, as using this method alone protects against man-in-the-middle attacks and eliminates a few other possible break-ins that password-only authentication allows.
The most common way I see SSH keys being used is as a password-less, easy login for an admin to use for his remote linux systems. Although this is better than a password-only system, you are still vulnerable to someone obtaining your private key and using it for instant access to any system with the corresponding public key. I prefer to use them as a type of two-factor authentication for ssh logins, where I need to provide the password to the key plus (of course) have the actual key. Regardless, let’s get a basic configuration going and start using ssh keys. First, on the server side you can follow a config I’ve used in a previous post, but the general settings to ensure:

  • Port 2992 – (I always recommend changing the default port)
  • Make sure “RSAAuthentication” and “PubkeyAuthentication” are both set to yes
  • PermitRootLogin no
  • AuthorizedKeysFile %h/.ssh/authorized_keys
  • I’d also only allow users in a certain group to use ssh for further protection, so “AllowGroups sshlogin” for example, and add users who will ssh into this system into the sshlogin group.
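
Pulled together, the relevant chunk of /etc/ssh/sshd_config would look something like this (a sketch of just the settings above; leave the rest of your config intact):

Port 2992
PermitRootLogin no
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
AllowGroups sshlogin

Restart ssh after editing, and keep your current session open until you’ve confirmed you can still log in.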

Now on your machine, the computer you will be ssh’ing from, you need to create the key pair and copy the public key to the host(s). Generate the key pair as follows:
Create the .ssh directory in your linux home directory if it doesn’t exist already, and ensure correct permissions with chmod 0700 .ssh. Then create the actual keys, and when prompted specify a pass-phrase to use. (The pass-phrase is the optional part if you just want to authenticate with the keys alone, but as I mentioned above I prefer supplying one so you have two-factor authentication going on.) Keep in mind this can be a passphrase, not just a password.
ssh-keygen -b 2048 -t rsa -f ~/.ssh/MainKey
The -b specifies the length of the key and the -t the type (rsa or dsa).
I’d use RSA 2048, or if you are the more paranoid type, RSA 4096. I won’t delve into the RSA/DSA debate except to say that I’ve done a LOT of reading on the topic and I’m choosing RSA keys, and with sensitive servers I’d go with RSA at 4096.
[Update, Dec 2015] :: DSA is being deprecated in OpenSSH, so I now strongly recommend RSA over it. If you are running version 6.5 or newer of openssh I’d actually recommend using ed25519 keys over RSA; you can read this blog post for more details. Typically you can check your version with the ssh -V command. Your command to generate this key would simply be
ssh-keygen -t ed25519 -f ~/.ssh/MainKey
Note for Mac OSX users: the version of ssh is too old for ed25519 unless you are on OSX 10.11, El Capitan. If you are on an older Mac OSX version, simply google for the homebrew install of openssh to upgrade your version and use that.
[End Update]
Once the keys are created, scp the public key to the server and copy it into the .ssh directory there as “authorized_keys” (or wherever you specified in sshd_config with AuthorizedKeysFile). You can append multiple public keys to the authorized keys file; just cat mykey.pub >> authorized_keys for each public key.
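
For example, assuming the custom port 2992 from the server config above (hostname and username are illustrative):

scp -P 2992 ~/.ssh/MainKey.pub user@myserver.com:
ssh -p 2992 user@myserver.com
mkdir -p ~/.ssh && chmod 0700 ~/.ssh
cat MainKey.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys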

Now you may have noticed above when generating the key pair we passed the -f option. This way we can name our private key something other than the default. If you do not specify -f, the default will be id_rsa with the corresponding id_rsa.pub. So now to ssh into our system we will want to specify the identity file to use:
ssh -i /home/user/.ssh/MainKey user@myserver.com -p2992
The reason I want to show the more complicated use case of specifying a particular identity file is that you may want different key pairs for servers that are yours versus your employer’s, or any other separation that is important to you. You can simplify the management of which keys are used for which hosts in the ~/.ssh/config file.
vi ~/.ssh/config

Add both host names and their identity file as follows:
Host server1.myserver.com
IdentityFile ~/.ssh/MainKey
Host server1.workserver.com
IdentityFile ~/.ssh/id_rsa

That’s all there is to your basic key based login! Now lets briefly go over many of the popular use cases for these ssh-keys so you can see how powerful and helpful they can be for you.

First, as I mentioned earlier, I frequently see administrators using these without a password so they can quickly log in to the servers they manage. The problem is you are now relying solely on the security of that private key for complete access to those systems. I will strongly recommend (yet again) putting a pass-phrase on your keys; then, if you want a quick password-less login, simply configure ssh-agent to cache your pass-phrase on your system for that session. For details just search for “ssh-agent cache password” and you’ll find a few examples. This way, if your key is somehow stolen, that person still has no access to the servers as they don’t know the password, yet you have the efficiency of not typing a pass-phrase once you set up and configure ssh-agent to cache it on your system. Win-win here.
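
A minimal ssh-agent session looks like this; you type the pass-phrase once and it is cached until you log out:

eval "$(ssh-agent -s)"      # start the agent for this session
ssh-add ~/.ssh/MainKey      # prompts once for the pass-phrase
ssh server1.myserver.com    # no prompt now; the agent supplies the key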

Next, we can limit by IP or IP subnet where users with ssh keys are able to login FROM. So say you are the server admin and you have other users you manage on the server. Maybe you (especially after reading this) have a requirement that they only login with ssh keys. Being security conscious, you might have a few concerns, not the least of which are:

  • Did they create a passphrase or leave blank?
  • Will they give their key and pass to someone else to use?
  • What if their key is stolen or compromised?

One thing you can do, if you control the server with the public key, is limit by IP or subnet where they can be coming from. This way, if the user’s key is stolen it can’t be used anyway unless the thief is trying to login from the same IP (or subnet) as the original user. You can either wait till they login the first time and see what IP they are coming from, or ask them to send you the subnet or possible IPs they might come from. Once you have this IP or list of IPs, just open the authorized_keys file on your server and add a “from=” option to the beginning of their public key as shown below:

[root@mainserver .ssh]# cat authorized_keys
from="10.20.30.*,172.16.31.*,192.168.1.*" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArAkcHTOXZiDxcJEHNmrJRoM2HJE9Rq1uoiVHuTSjgl0THp0UFDNepnmCvk5bX22KzjAUa8PWnq1ZDw5Uf9/i6N/SiXittnxT0dFnZGj6RRep5Ae3AmOJbg0i69XL9o2zBJnYo2JVPXJkCDhSvVWokZUn5QjaJiGNigP9plA1He94Slkhn2jTxx1iehx9Vy/ojnxsJDqpTa8hX1GQK/b1jzvWdN3Qg+EhlxBDfKSA8u4uGsPP+6hXGgFLRluG/6yizj8LDF1LRWIKYaBvaLNJ+720sAI9O4miHyxY4n3ghBDbULPoLGz5a6bYRJ9pY9i0ySQEmXNSD+0u31+fAaGGpQ== user@sauronseye

So if you have this public key and it is supposed to be used by a user at Company A, and that user gives it to someone else at Company B, or it is compromised by an evil haxor, then that key, whether it had a password or not, will not allow them access. Then with your normal audits you do (right?) you see failed logins from a foreign IP and realize the key was compromised. The server is still secure and your world is safe from harm :).

The last, and yet another very common, way of using ssh keys is for specialized script use. You have a script on one server, and at a certain point in the script you’d like it to kick off a command or script on another server. How do you do this? Well, you can create an ssh key specific to this purpose. Create your keys just as above, but name the identity for the job at hand so you don’t get confused.
ssh-keygen -t rsa -f ~/.ssh/login_audit
For an example, let’s say you have a primary internal server and you want it to go out and gather a report of all the last logins from your remote servers. You could put a script like this on the remote server:

#!/bin/bash
# Script: last_logins.sh
last -a > /home/user/last_logins.barney
scp /home/user/last_logins.barney fred@mainserver:.
# End Script

Then in the authorized_keys file on the remote server, edit the public key just as we added the “from=” option, but instead use “command=”:

command="/home/user/scripts/last_logins.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding [rest of pub key...]

Now whenever you ssh in using that identity file, the only thing it will do is run that command and exit. There are countless uses for this alone, and being aware that you can use ssh keys like this will help you realize WHEN you should use them like this.
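
So with the login_audit key from above, pulling the report is a single command, which you could also drop into cron on the main server (hostname and schedule are illustrative):

ssh -i ~/.ssh/login_audit user@remoteserver.com
# or from the main server's crontab, e.g. daily at 3:30am:
30 3 * * * ssh -i /home/fred/.ssh/login_audit user@remoteserver.com
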
There is a bunch more you can do with ssh and ssh-keys so use this as a starting point and take advantage of the tools at your disposal!

~~~~~BONUS TWO-FACTOR AUTHENTICATION SECTION~~~~~
We discussed above using ssh keys as two-factor authentication to login to a server via ssh. Some of you may be aware that Google offers two-factor authentication to login to your Google services (gmail, apps, etc.). They accomplish this with an application you can install called Google Authenticator.
If you don’t use Google Authenticator and 2-factor authentication on your google account, you should strongly consider it. Gmail accounts are prime targets for compromise, and a lot of people use that account for a wide array of things that would make a compromise that much more damaging. Read some of the links below for starters:
http://www.multitasked.net/2011/jun/27/hacked-gmail-google-account/
http://threatpost.com/en_us/blogs/new-password-not-enough-secure-hacked-e-mail-account-100410

Once you have Google Authenticator installed, because now you fear for the safety of your gmail… what else can you do with this thing? Well, we can use it for two-factor authentication to login to our machines, much in the same way we used the ssh keys.
I’ve found a great post that details how to do this; don’t forget to read the comments there as they include a lot of valuable information as well:
http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html

Linked also in that post is how to use a YubiKey to login with two-factor authentication. These are all great and really secure methods to login and authenticate to your systems.

Hopefully you realize the importance of two-factor authentication for securing the things that really matter to you. There are countless hacks and exploits, with more found every day. Trust me, it is much easier to take the extra 10 minutes to set this up than the hours or days of recovery from a security breach. One needs to be proactive about security, because if you are reactive then you are too late.

Aug 24, 2011
 
Configure Secure FTP, With VSFTPD

So you need to setup an FTP server?  Are you sure?  The reason I ask is that FTP isn’t really a great option in many cases due to its inherent lack of security “consciousness”, as I like to call it.  This is best summed up by the File Transfer Protocol’s wikipedia page, excerpted here:

FTP was not designed to be a secure protocol—especially by today’s standards—and has many security weaknesses. In May 1999, the authors of RFC 2577 enumerated the following flaws:
Bounce attacks
Spoof attacks
Brute force attacks
Packet capture (sniffing)
Username protection
Port stealing

FTP was not designed to encrypt its traffic; all transmissions are in clear text, and user names, passwords, commands and data can be easily read by anyone able to perform packet capture (sniffing) on the network. This problem is common to many Internet Protocol specifications (such as SMTP, Telnet, POP and IMAP) designed prior to the creation of encryption mechanisms such as TLS or SSL. A common solution to this problem is use of the “secure”, TLS-protected versions of the insecure protocols (e.g. FTPS for FTP, TelnetS for Telnet, etc.) or selection of a different, more secure protocol that can handle the job, such as the SFTP/SCP tools included with most implementations of the Secure Shell protocol.

With that being said, there are still plenty of valid reasons for wanting/needing an FTP server.  I won’t go over an FTPS configuration, but you’d literally be an SSL cert and a couple of config lines away from it by the end of this post.  If you simply need a good way to transfer files to/from your server and don’t need other users involved, then that is a clear case for taking a strong look at SFTP/SCP transfers instead.  If you are still reading this then you want an FTP server, don’t you?  Here’s what we are setting up: an FTP server using VSFTPD, which is probably the most secure FTP daemon one could use.  Tried and tested for years on very large sites, it is rarely, if ever, found to have an exploit of any kind.  Added to that, we will be configuring it with virtual users.  These are users solely for the FTP server that do NOT have a local account on your server.  So you know how FTP sends the user/pass in clear text?  Well, no one will be able to ssh into your server because of that.  Worst case, they will break into a chrooted environment where they will only have access to the FTP files you have given that user anyway.  If you’ve kept in mind that FTP is clear-text transfers, then you don’t have confidential files there to be compromised any further.  Regardless, even that has never happened to me and I’ve been running VSFTPD for the past 10 years.  If you supplement it with a nice fail2ban configuration and further secure it with apparmor, you’ll be in incredible shape.

Let’s start building this great configuration.  First, aptitude install vsftpd. This installs the core of vsftpd. Next we’ll want to aptitude install db4.6-util. This installs the utility we will use to store our virtual users’ user-names and passwords. Now find the vsftpd.conf file; there is a good chance it’s right at /etc/vsftpd.conf. Depending on the distro and exact version you are using, some of these paths might be off for you a tad, but you should be able to figure that part out; just follow the framework here. Go ahead and backup the original vsftpd.conf file and then edit it to have just the following:

# vsftpd.conf
listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd
pam_service_name=vsftpd-virtual
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
hide_ids=YES
user_config_dir=/etc/vsftpd/vsftpd_user_conf
dual_log_enable=YES
guest_enable=YES
guest_username=ftp
virtual_use_local_privs=YES
local_root=/var/vsftp

A few things to point out in that configuration so you know what’s going on. We disable anon logins; that’s important. Take note of the pam_service_name of “vsftpd-virtual”; we’ll be using that in a second to setup our authentication. Also look at user_config_dir; that will give us a lot of the options people look for and don’t know how to configure. And of course chroot_local_user is YES.
Now, mkdir /etc/vsftpd and mkdir /etc/vsftpd/vsftpd_user_conf.
Change directory to /etc/vsftpd and vi virtual-users.txt (or whatever editor you use; I’ll recommend vi). This file lists our virtual users with username and password, each on their own line. So if you wanted two virtual users and maybe one “admin” account for yourself, the file could be:

admin
adminpass1sh3r3
ftpuser1
us3r1p@ss
ftpuser2
correcthorsebatterystaple

Each username is followed by that user’s password on the next line. Since this is a very sensitive file, as root you should chmod 0600 virtual-users.txt. Now let’s create the encrypted db that vsftpd will use to authenticate these users: db4.6_load -T -t hash -f virtual-users.txt virtual-users.db. This converts the .txt file into a Berkeley v4.6 database format. The benefit of keeping the original .txt file around is that you can easily add additional users or tell them their password when they inevitably forget it.
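
When you later need to add a user, just append the two lines and rebuild the db (the new user here is illustrative):

echo "ftpuser3" >> virtual-users.txt
echo "s0meN3wPassphrase" >> virtual-users.txt
db4.6_load -T -t hash -f virtual-users.txt virtual-users.db
chmod 0600 virtual-users.db
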
Now let’s finish up the authentication with vi /etc/pam.d/vsftpd-virtual. This is the pam service name in our vsftpd.conf file. Make this file look like:

# Standard behaviour for ftpd(8).
auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
# Note: vsftpd handles anonymous logins on its own. Do not enable
# pam_ftp.so.
# Standard blurb.
# @include common-account
# @include common-session
# @include common-auth
auth required pam_userdb.so db=/etc/vsftpd/virtual-users
account required pam_userdb.so db=/etc/vsftpd/virtual-users

That’s it for the authentication. Now let’s lock down our users’ chroot location and give them access only to certain commands. Go to your /etc/vsftpd/vsftpd_user_conf directory, or whatever you’ve set user_config_dir to. In there, create a file named for each of the users’ user-names. So using the example virtual-users.txt file, I’d create an admin file, an ftpuser1 file and an ftpuser2 file. Simply vi admin
and in there put:

# user_sub_token=$USER
local_root=/var/ftp

Note we’ve commented out the user_sub_token but for ftpuser1 we could use it like this:

user_sub_token=$USER
local_root=/var/ftp/$USER

Now you can simply cp ftpuser1 ftpuser2 and you have two virtual users chrooted to the /var/ftp/ftpuser1 and /var/ftp/ftpuser2 directories respectively. Meanwhile, if you login as admin you’ll be chrooted a level up from there, so you will see and have full access to those two folders.
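
One assumption worth making explicit: the chroot directories themselves have to exist, and since guest_enable maps our virtual users onto the local ftp user (guest_username=ftp in vsftpd.conf), that account needs ownership of anything they’ll write to. Something like:

mkdir -p /var/ftp/ftpuser1 /var/ftp/ftpuser2
chown -R ftp:ftp /var/ftp
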
Now how can we restrict what they can or can’t do within their chrooted environment? Well, I keep a text file of all the FTP commands saved on the system for reference:

#
# List of FTP commands
#
# ABOR - Abort an active file transfer.
# ACCT - Account information.
# ADAT - Authentication/Security Data (RFC 2228)
# ALLO - Allocate sufficient disk space to receive a file.
# APPE - Append.
# AUTH - Authentication/Security Mechanism (RFC 2228)
# CCC - Clear Command Channel (RFC 2228)
# CDUP - Change to Parent Directory.
# CONF - Confidentiality Protection Command (RFC 2228)
# CWD - Change working directory.
# DELE - Delete file.
# ENC - Privacy Protected Channel (RFC 2228)
# EPRT - Specifies an extended address and port to which the server should connect. (RFC 2428)
# EPSV - Enter extended passive mode. (RFC 2428)
# FEAT - Get the feature list implemented by the server. (RFC 2389)
# HELP - Returns usage documentation on a command if specified, else a general help document is returned.
# LANG - Language Negotiation (RFC 2640)
# LIST - Returns information of a file or directory if specified, else information of the current working directory is returned.
# LPRT - Specifies a long address and port to which the server should connect. (RFC 1639)
# LPSV - Enter long passive mode. (RFC 1639)
# MDTM - Return the last-modified time of a specified file. (RFC 3659)
# MIC - Integrity Protected Command (RFC 2228)
# MKD - Make directory.
# MLST - Lists the contents of a directory if a directory is named. (RFC 3659)
# MODE - Sets the transfer mode (Stream, Block, or Compressed).
# NLST - Returns a list of file names in a specified directory.
# NOOP - No operation (dummy packet; used mostly on keepalives).
# OPTS - Select options for a feature. (RFC 2389)
# PASS - Authentication password.
# PASV - Enter passive mode.
# PBSZ - Protection Buffer Size (RFC 2228)
# PORT - Specifies an address and port to which the server should connect.
# PWD - Print working directory. Returns the current directory of the host.
# QUIT - Disconnect.
# REIN - Re initializes the connection.
# REST - Restart transfer from the specified point.
# RETR - Retrieve (download) a remote file.
# RMD - Remove a directory.
# RNFR - Rename from.
# RNTO - Rename to.
# SITE - Sends site specific commands to remote server.
# SIZE - Return the size of a file. (RFC 3659)
# SMNT - Mount file structure.
# STAT - Returns the current status.
# STOR - Store (upload) a file.
# STOU - Store file uniquely.
# STRU - Set file transfer structure.
# SYST - Return system type.
# TYPE - Sets the transfer mode (ASCII/Binary).
# USER - Authentication username.

Then go back into the user file, for example ftpuser1, and add this:

cmds_allowed=ABOR,ACCT,ALLO,APPE,CCC,CDUP,CWD,EPSV,LIST,MDTM,MLST,MODE,NLST,NOOP,OPTS,PASS,PASV,PBSZ,PORT,PWD,QUIT,REIN,REST,RETR,RNFR,RNTO,MKD,SITE,SIZE,STAT,STOR,STRU,SYST,TYPE,USER

This allows full download and upload of files, leaving off some potentially dangerous commands. This user also can NOT delete files or remove directories. And if you want the user to only be able to download or upload, then leave off STOR or RETR respectively.

This layout is great because if your users need files, you don’t need to login as them to put the files there. You can go in with your “admin” account, which still has no real access to your actual server filesystem, and put things into your users’ chrooted directories. You can lock down your users not only to certain directories, but build a hierarchy such that some users have a chrooted environment contained by another user’s. Finally, you can dictate which commands each user is allowed to run, giving you beautiful granular management of your entire FTP server. For the past decade this configuration has yet to fail me on its adaptability and security, with FTP servers having a multitude of specific uses.

Jun 23, 2011
 
Using AppArmor

Application Armor.  I’m sure you can imagine what this little utility does, but allow me to elaborate a bit and explain why configuring and using AppArmor has become Part III of our little series.  First let’s quickly look at what we have so far, if you’ve gone through Quick Secure Setup Part I and Quick Secure Setup Part II.  From the security standpoint, your server has every non-essential port turned off.  Only applications you explicitly care about have their ports open through your firewall.  You are getting security updates within 24 hours and are reviewing the logs for anomalies.  For the ports that are open, fail2ban is checking the logs for errors and banning the offending IPs.  So what’s left?  Well, a whole lot actually, but we won’t concern ourselves with every little aspect of security.  We are securing the main attack vectors to a standard server sitting out on the internet, and there is one attack left that you’ll sleep better at night having a defense against.

The zero-day exploit.  You’ve done all this to your server and it’s locked down nice and tight.  Along comes a new exploit for apache, ntp, postfix, ssh, vsftpd, etcetera (ports you need open to run your services), and before any security updates can be released to address the issue, the black hats out there have a tool ready to take advantage.  It might be an exploit that allows someone to crash the exploited service (so, for example, your web site goes down but no real compromise has taken place) all the way up to something allowing them to escalate to superuser privileges.  You’ve gone to bed confident and secure and woken up with your server in some not-happy place, and you following right behind.  How do we alleviate this?  It’s the next most common thing that could happen that we’ve yet to address.  Oh wait, you’ll skip this Part?  Exploits aren’t that common?  You might want to look at the US-CERT Cyber Security Bulletins and subscribe to their RSS feed.  Keep in mind that despite all the efforts we take here, you won’t be left with an impenetrable fortress.  There’d be much more to do for that.  What you will be left with, however, is a server that would require someone to target it explicitly, with motivation, to find a way to compromise it.  Keep in mind, YOU will potentially be one of the weak links in your server’s security.  On a sufficiently secured server it may be much easier for an attacker to leverage social engineering tactics to gain some sort of access, so be careful about the type of information you divulge on those social applications, chats, forums, etcetera.

Now let’s get on with business; we have zero-day exploits and application bugs to defend against.  Welcome, AppArmor.

Useful AppArmor Links

Peruse the links above so you can start familiarizing yourself with what AppArmor does, but let me explain in completely non-technical terms.  The applications on your system serve some function for you, the user.  Just as, say, a user will use a car to go from point A to point B.  Imagine the user has many options for getting from point A to point B, just as an application may be able to do its job in different ways.  The best way might be for the user to take the highway, arriving in great time, and this might be the way programmed into the application.  But the user could drive the car on local roads or even off-road.  When the user does this they could be affecting other things not intended within the “system”.  For example, the user could take the car onto private roads not intended for its traffic, or crash into a mailbox on the way.  All this would be bad, and an exploited application could do the same type of things, going places it was never intended to or affecting another process on your system it otherwise wouldn’t.  Now if we create an effective AppArmor profile for this application, it’s like we’ve built a railroad directly from point A to point B.  The application, now a locomotive instead of a car, cannot leave the rails.  It has one allowed path it is restricted to, and it’s the path we defined by laying down the rails, i.e. configuring the AppArmor profile.  So even if the application is exploited and wants to leave the rails to drive elsewhere, it simply can’t; it is contained.  This won’t be a step-by-step guide to building AppArmor profiles. I’ll give you a good overview and pointers, but the best thing to do is read, and most importantly in this case, experiment for yourself. One of the better guides on AppArmor profiles I’ve found is Introduction to AppArmor by bodhi.zazen.

First step: identify the applications or processes you would like to define an AppArmor profile for.  You could run a command like netstat -tulpn and take a look at the output. Anything that lists the Local Address as 127.0.0.1 isn’t listening to the outside world, but anything else is a candidate to AppArmor. Or you could look at ufw status verbose and see the ports you have open to the outside world. Any application listed there is a great candidate to AppArmor.  The first thing to run is apparmor_status. It will print out a nice listing of the default profiles on your server and tell you which defined profiles are in enforce mode. On one of my servers just mysqld and ntpd are in enforce mode by default. There are two other main processes running that have external ports open: apache2 and sshd. So let’s work through the steps required to create a profile for apache2; then you can follow the same framework for any other process you want to AppArmor on your server.  First we run aa-genprof /usr/sbin/apache2 to start creating/editing the profile for apache.  With the later ubuntu versions there is a generic, very permissive profile already in place, so it will build off of that. Finish the genprof and run aa-complain /etc/apparmor.d/usr.lib.apache2.mpm-prefork.apache2. This puts apache2 in complain mode. Now run your site for a while and do everything you normally do with it. You can also just leave apache in complain mode for a day or two as your site goes through its normal usage. Once you’ve done enough, run aa-logprof, and for any “complaints” apache had regarding its profile you can step through the wizard and add to the profile. To enforce the apache profile it’s simply aa-enforce /etc/apparmor.d/usr.lib.apache2.mpm-prefork.apache2. Apache is a special case with apparmor as there is an apparmor module you can enable with a2enmod apparmor, which then enables you to lock down your virtual hosts correctly via AAHats. So for this site I went into the <Directory /var/www/mysite> area and added AAHatName blog.vigilcode.com, restarted apache and ran aa-logprof again, and it stepped through new things to add to the profile, and more specifically the new Hat.  You can find more detail on this process here.
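
To recap that walk-through as a bare command sequence (the profile path matches the mpm-prefork example above):

aa-genprof /usr/sbin/apache2
aa-complain /etc/apparmor.d/usr.lib.apache2.mpm-prefork.apache2
# ...run the site normally for a day or two in complain mode...
aa-logprof
aa-enforce /etc/apparmor.d/usr.lib.apache2.mpm-prefork.apache2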

Apache is more challenging to AppArmor because of the child processes it spawns, and with each virtualhost being another “server” in essence, there is more to do and follow. When learning and getting familiar with AppArmor, I think the best first step is to use your desktop and create an AppArmor profile for your web browser. It’s something you use all the time, so you can leave it in complain mode for a while, then throw it in enforce and get accustomed to debugging issues by putting it back in complain mode and either manually looking over the log or running aa-logprof to see what else came up.

This is definitely the least “quick” part of the Quick Secure Setup series, but if you take your time and master a few good profiles for your servers, with the other steps taken so far you won’t have many security worries.

Future: I’m currently keeping my eye on Tomoyo Linux as an AppArmor competitor, so you might want to take a gander over there, if for nothing else but the knowledge.

May 09, 2011
 

Welcome to Part II of the Quick Secure Setup Series.  Be sure to check out Quick, Secure Setup Part I first, although this can be taken on its own if you’d just like to configure UFW with Fail2ban correctly.

At the end of Part I we quickly set up a basic iptables config just to get the firewall up and doing its job.  The problem with iptables isn’t actually a problem with iptables itself, but rather with the administrator running it.  Iptables is a great firewall, and like any great firewall there is a lot you can configure it to do.  The more configuration options open to the user, the more complicated a piece of software can get.  What I’ve witnessed is that a well-intentioned user will configure and run iptables on Day 1 of their server, just as we did in Part I, but as time moves on and they need to run more applications, or find themselves with something not working just right that seems to behave fine once iptables is stopped, then iptables either gets turned off or mis-configured with larger holes than what is needed.  Unless you are a linux administrator of some sort, you are probably going to learn just enough of iptables to get it running on that initial setup.  After that you don’t really touch a firewall on a day-to-day basis, so by the time you have a new application installed that isn’t playing nice with your current iptables, you don’t want to take the steep learning-curve plunge to figure out the correct configuration you would need.  Therein would lie your chink in the security chain.

If you’re starting with UFW for the first time, check the UFW Ubuntu Wiki.  The introduction on that page explains perfectly why one would want to use UFW over raw iptables.

The Linux kernel in Ubuntu provides a packet filtering system called netfilter, and the traditional interface for manipulating netfilter are the iptables suite of commands. iptables provide a complete firewall solution that is both highly configurable and highly flexible.

Becoming proficient in iptables takes time, and getting started with netfilter firewalling using only iptables can be a daunting task. As a result, many frontends for iptables have been created over the years, each trying to achieve a different result and targeting a different audience.

The Uncomplicated Firewall (ufw) is a frontend for iptables and is particularly well-suited for host-based firewalls. ufw provides a framework for managing netfilter, as well as a command-line interface for manipulating the firewall. ufw aims to provide an easy to use interface for people unfamiliar with firewall concepts, while at the same time simplifying complicated iptables commands to help an administrator who knows what he or she is doing.

You may also find useful the Ubuntu Community page on UFW.  There are helpful links at the bottom of that page to continue reading.  Some quick google searches will get you moved to UFW quite easily.  Remember, you are using the same backend as iptables, just with a less complicated front-end to get the rules going.  So if you flush your current iptables and put in some basic ufw rules for your ssh, apache, and in my case NTP, your ufw status output could look like this:


Status: active

To            Action   From
--            ------   ----
OpenSSH       LIMIT    Anywhere
Apache Full   ALLOW    Anywhere
123           ALLOW    Anywhere

Now recall we changed our OpenSSH port in Part I, so to keep UFW simple I’ve edited the openssh-server file in /etc/ufw/applications.d to reflect our custom port. For any custom ports you find yourself opening or blocking, consider creating an application profile; it’ll be easier to read your rules, and if you don’t touch them for months, easier to remember when you have to re-visit them. You can easily see your apps and their configuration with ufw app list and ufw app info OpenSSH. Once you have your basic UFW configuration in place you should install fail2ban: aptitude install fail2ban.
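
For reference, an application profile is just a small INI-style file; here is a sketch of what the edited /etc/ufw/applications.d/openssh-server could contain with our custom port from Part I (the title and description fields are free text):

[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenBSD Secure Shell server
ports=2992/tcp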

Fail2ban is a very simple yet very useful application: it looks at the log files you tell it about, parses them for certain errors or failures, and then inserts a firewall rule to block the IP that caused the error or failure.  Trust me when I tell you that you want this.  Every server I’ve ever put on the internet gets scanned by scripts looking for open ports, trying ssh or ftp logins, attempting URLs for various mysql, php and remote access admin pages, etcetera, etcetera, etcetera….  Here is a small example of ports that were scanned on my server:

Service: ms-sql-s (tcp/1433) ([UFW BLOCK])
Service: ssh (tcp/22) ([UFW BLOCK])
Service: sip (udp/5060) ([UFW BLOCK])
Service: 3389 (tcp/3389) ([UFW BLOCK])
Service: 27977 (tcp/27977) ([UFW BLOCK])
Service: radmin-port (tcp/4899) ([UFW BLOCK])
Service: 5900 (tcp/5900) ([UFW BLOCK])
Service: http-alt (tcp/8080) ([UFW BLOCK])
Service: loc-srv (tcp/135) ([UFW BLOCK])
Service: mysql (tcp/3306) ([UFW BLOCK])
Service: ms-sql-m (udp/1434) ([UFW BLOCK])
Service: 49153 (udp/49153) ([UFW BLOCK])
Service: 1022 (tcp/1022) ([UFW BLOCK])
Service: socks (tcp/1080) ([UFW BLOCK])

On the web server side, various URLs for web administration are always attempted, like /phpMyAdmin, /myadmin, /mysql, etcetera. Without Fail2ban in place these scripts can run until they’ve exhausted every login attempt they want, or every URL in their list. WITH Fail2ban we can give them 3-5 attempts and then, having realized they are a script kiddie, ban their IP from the server for X amount of time.

Out of the box Fail2ban works with iptables rules; however, these don’t play nice with our simpler UFW commands, so we need to make a couple of edits to have Fail2ban block the IPs with UFW.

First let’s go into /etc/fail2ban/jail.conf and change a few default ban actions for ssh and apache to use ufw actions we will create:

[ssh]
enabled = true
banaction = ufw-ssh
port = 2992
filter = sshd
logpath = /var/log/auth.log
maxretry = 3


[apache]
enabled = true
port = http,https
banaction = ufw-apache
filter = apache-auth
logpath = /var/log/apache*/error*.log
maxretry = 4


[apache-filenotfound]
enabled = true
port = http,https
banaction = ufw-apache
filter = apache-nohome
logpath = /var/log/apache*/error*.log
maxretry = 3


[apache-noscript]
enabled = true
port = http,https
banaction = ufw-apache
filter = apache-noscript
logpath = /var/log/apache*/error*.log
maxretry = 6


[apache-overflows]
enabled = true
port = http,https
banaction = ufw-apache
filter = apache-overflows
logpath = /var/log/apache*/error*.log
maxretry = 2

In this file we are enabling the sections we want fail2ban to monitor and take action on. You’ll want to make sure your logpath points to your apache error logs, and take note of the filter names, as each corresponds to a file within the filter.d directory. All the filters are simply regular expressions to pattern match some error condition in the logfile; once matched, fail2ban will execute the banaction. So if you look at the apache-auth filter, it will match any user authentication failures on your websites. The only filter I’ve modified is apache-nohome, which I’ve edited to match any file-not-found error, not just the home directory attacks the default checks for.
The original regex was:
failregex = [[]client <HOST>[]] File does not exist: .*/~.*
and my modified version for any file-not-found errors is:
failregex = [[]client <HOST>[]] File does not exist: .*
BE CAREFUL if you choose to also make this change. There are many things that will cause file-not-found errors that may not be attacks at all: search bots looking for robots.txt, normal users triggering on favicon.ico if you don’t have one, etc. So if you make that change, check your logs frequently and fix any valid file-not-found errors. The reason I turned this on is the constant attempts at bogus URLs, as I mentioned above, where the scripts look for web GUI admin pages.

Now we simply need to create the banaction files we specified in jail.conf. First up is /etc/fail2ban/action.d/ufw-ssh.conf:

[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ufw insert 1 deny from <ip> to any app OpenSSH
actionunban = ufw delete deny from <ip> to any app OpenSSH

and /etc/fail2ban/action.d/ufw-apache.conf:

[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ufw insert 2 deny from <ip> to any app "Apache Full"
actionunban = ufw delete deny from <ip> to any app "Apache Full"
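Once both action files exist and jail.conf is saved, restart fail2ban so it picks everything up; on Ubuntu that should just be:

service fail2ban restart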

As you can see, the ufw command here is quite simple. The actionban denies the offending IP for the specified application. The only gotcha is that we have to specify the line at which the rule is inserted, because order matters. Our original rules allow these apps, so any denies for these apps must come BEFORE the allow rule. Rules are processed in order: if the allow came first, the offender would never hit the deny rule and would continue to hammer our server. So we make sure the denies get inserted before the allow lines and all is well. The great thing about UFW rules is that you can almost read them and understand what they are doing, as opposed to the standard iptables banaction, which could look like this:

actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP

As you can see, UFW gives you much more readability without the learning curve you’d otherwise have to climb to get a good grip on iptables rules.
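If you’re curious what the result looks like once a ban has fired, ufw status numbered lists the rules with their positions. A purely hypothetical example (the IPs are illustrative placeholders), showing the fail2ban denies sitting above the allows:

ufw status numbered

Status: active

     To                         Action      From
     --                         ------      ----
[ 1] OpenSSH                    DENY IN     203.0.113.5
[ 2] Apache Full                DENY IN     198.51.100.7
[ 3] OpenSSH                    ALLOW IN    Anywhere
[ 4] Apache Full                ALLOW IN    Anywhere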

If you have logwatch configured as in Part I, you’ll see the previous day’s bans in the daily logwatch email. For example, the fail2ban section in one of my logwatch emails had this:

--------------------- fail2ban-messages Begin ------------------------
Banned services with Fail2Ban:                 Bans:Unbans
apache-filenotfound:                            [ 3:3 ]
90.80.141.37 (37-141.80-90.static-ip.oleane.fr)   1:1
92.82.225.197 (adsl92-82-225-197.romtelecom.net)  1:1
120.70.227.130                                    1:1
---------------------- fail2ban-messages End -------------------------

As I mentioned earlier, for the first week with this configuration you should check your apache error log and make sure those file not found errors were scripts looking for /phpmyadmin or some other page that truly doesn’t exist, and not a normal user getting file not found errors because of favicon.ico or something else.

That’s it! ufw status will show you any of the rules in effect on your system. With these first two parts done you’ll have a server configured securely with nothing unnecessary open to the internet, and the ports that are open are now blocking the bad guys from messing with them too much. In Quick Secure Setup Part III, the last in the series, we’ll tighten everything up even more and end up with server security that is second to none.

Apr 16 2011
 
The Not So Humble Bundle

Ever played or even heard of indie games? Many have not, but a little piece of humble pie and some marketing genius has brought them into the limelight. Indie games aren’t a type of game but rather a classification: any game developed and distributed without a large video game publisher behind it. At first glance you think,

Big deal, if someone has a good enough game it will sell regardless.

Alas, it isn’t quite that easy. There are a good number of independent (hence “indie”) game developers out there without deep pockets, so getting your game known to the world, despite its merits, can be quite challenging. Enter the Humble Bundle. First launched in May of 2010, the idea was to get a group of these indie games together as a bundle and sell them for… wait for it… whatever you want to pay! What?! That’s stupid, crazy, and will drive these guys straight out of business? No? It raised $1.27 million total, and the actual developers of the games each ended up with roughly $166,000. This is GREAT money for an independent game developer.
Now why did the developers get so much less? Well, that’s part of the beauty and genius of this system. Not only do you pay what you want, you can also divide what you pay among a few different parties:

  • Child’s Play – a charity that brings video games to hospitalized children and helps to fight the stigma of video games
  • Electronic Frontier Foundation – Defender of digital rights.  Aligns with these Indie games that are all released DRM-free.
  • Humble Bundle, Inc. – the company that develops the promotion, pays for the site/server and bandwidth needed to run it all.

By default the amount you enter is divided up with 55% going to the developers and 15% each to the three parties above.  The option to give to charity and to divide the money however you wish are, I think, two more things that help drive traffic and sales.  These bundles might be humble in the context of the small teams that create them, but this sales and distribution method is epic.  The second humble indie bundle launched in December of 2010 and had raised $1.8 million by the time it ended.  There is now a third humble indie bundle, but it’s being called the Humble Frozenbyte Bundle since all the included games this round are from the indie developer Frozenbyte.

Another great thing about these games is that they are multi-platform.  It’s not easy to find games that work on Windows, Mac, and Linux.  When you “buy” a bundle via the Humble Bundle you get a link to download all of the games for all of the platforms.  In theory this link will be available for a long time, but it’d be wise to download everything you think you’ll ever want and save it locally.

There are a lot of things the Humble Bundle did to create a synergy of sorts: driving traffic and sales, and earning everyone involved money they would otherwise never have seen.  I hope this model not only continues but spreads to other games and even to completely different markets.

Apr 08 2011
 

Welcome to the first part of my Quick Secure Setup series.  This series serves one primary goal: to help you quickly and efficiently configure your Ubuntu server so that it is very secure WHILE still providing great usability and maintainability.  A lot of people have great intentions about securing their server: SELinux, detailed iptables configurations, log audits, etcetera.  I’ve found that after a short while, because of the maintenance involved or the complexity of the configurations, the average user will not keep up with such a setup and their server ends up exposed in one way or another.  This first part gives you the initial, basic things you should configure or customize to get you on the road to an extremely secure server.  Future parts will go into more depth with various applications to help you with their configurations.  Each will be straightforward, quick to implement, and easy to maintain.  Finally, I’ll be referencing the Lucid Lynx 10.04 LTS Ubuntu server throughout, however much of this will apply to any distribution or Ubuntu version with minor tweaks.

Congratulations, you’ve built your Ubuntu server.  First things first: don’t “enable” root.  Simply edit the /etc/group file and add your account (for this article’s sake, your account = ‘myuser’) to the admin group. This will allow you to run any command with sudo, or just run sudo -i and drop into root’s shell as if you logged in as root. If you had set a password for the root account, I suggest you fix it by doing the following:

  • sudo usermod -p '!' root
  • Then check the /var/log/auth.log over the next couple days for  entries like: CRON[14357]: pam_unix(cron:account): account root has expired (account expired)
  • If you see those then run: sudo passwd --unlock root followed by sudo usermod --lock root
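If you’d rather not hand-edit /etc/group, the same admin group addition can be done with a single command (‘myuser’ again being a stand-in for your own account name):

usermod -a -G admin myuser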

OK, so we have a sudo-enabled user. Next, you need to be able to log in to this box securely without exposing/advertising your server to the internet, so let’s lock down ssh a bit. First create an sshlogin group: groupadd sshlogin, then add your 'myuser' account to that group with: usermod -a -G sshlogin myuser. Now edit the /etc/ssh/sshd_config file.
Starting near the top you’ll see a line with Port 22. I strongly suggest changing this to another port; this change alone will stop a lot of attacks on your server (you’ll learn more about that in Part II).  For the sake of being easy to remember, set it to Port 2222 now.  Most of the defaults in the rest of the file are pretty good, but there are a few others we should change, so ensure the following are set:

  • PermitRootLogin no
  • X11Forwarding no
  • UseDNS no
  • AllowGroups sshlogin

This last line ensures that only users in the sshlogin group are allowed to even attempt an SSH login to your server. This is a good extra bit of security, as there are a lot of other default accounts on ubuntu and you don’t want script kiddies trying to utilize an exploit with known user names to get into the system.  Check the ubuntu community page, StricterDefaults, for a few other nice tidbits on increasing your default security and apply what fits your server.  At this point you have one user, whose user name is not a known default, allowed ssh access to your system on a non-standard ssh port.  Very nice, you are already more secure than most servers out there!  I strongly recommend leveraging ssh for file transfers (scp) as the data will be encrypted automatically for you.  Just remember that with the non-standard port you have to specify scp -P 2222 ....  In another article, I’ll introduce a secure way to configure FTP as most people find it easier than scp.
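Pulled together, the sshd_config changes from this section amount to the small sketch below. Note that sshd only reads its config at startup, so restart it afterward, and keep your current session open while you test a fresh login on port 2222 so a typo can’t lock you out:

Port 2222
PermitRootLogin no
X11Forwarding no
UseDNS no
AllowGroups sshlogin

service ssh restart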

Now let’s address something that is frequently overlooked: keeping the system and user-installed packages updated. There is a concern that automatically updating packages could break something on your system that was previously working.  Although this is a valid concern, one nice feature of Ubuntu’s package system is the concept of “safe-upgrades” and “full-upgrades”.  Safe-upgrades are meant to be just that: rather safe to do at any time.  That doesn’t mean there is zero chance of a safe-upgrade package causing an issue, but I’d say the scale of risk is tipped in favor of applying these versus NOT, and leaving your server exposed to missing patches, especially security ones.  So we’ll configure the system to check for security updates every night and apply them to your server.  To get started:
aptitude install unattended-upgrades update-notifier-common
To configure the settings for unattended-upgrades: vi /etc/apt/apt.conf.d/50unattended-upgrades
Pay close attention to the first two sections of that file:

// Automatically upgrade packages from these (origin, archive) pairs
Unattended-Upgrade::Allowed-Origins {
"Ubuntu lucid-security";
// "Ubuntu lucid-updates";
};


// List of packages to not update
Unattended-Upgrade::Package-Blacklist {
// "vim";
// "libc6";
// "libc6-dev";
// "libc6-i686";
};

What you can see is that I am only allowing updates from lucid-security, which is quite safe. You could also add lucid-updates by removing the // from the front of that line, and then in the second section list the packages you do not want automatically upgraded. My issue with the blacklist is that you need the exact package name for it to work. It doesn’t support regexes (unattended-upgrades is written in python, and I have seen a simple edit to make it support regular expressions, but we won’t be doing that here). I see a lot of forum posts where people thought they were blocking kernel upgrades, the blacklist didn’t match, and their server upgraded its kernel anyway. I’d recommend just automatically applying the lucid-security updates and logging into your server to manually do any others.
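If you want to see what would be upgraded before trusting the nightly run, the tool can be invoked by hand in dry-run mode (note the binary name is singular, unlike the package name):

unattended-upgrade --dry-run --debug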
Before any of this will run automatically we need to create/edit one more file: vi /etc/apt/apt.conf.d/10periodic and add this to it:

// Values are in days: refresh package lists daily, download upgradeable
// packages daily, autoclean the package cache weekly, and run
// unattended-upgrade itself daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

Finally we are on the last main section of Part I. You don’t want to be blind to what is going on with your server, but realistically you won’t log in every day and scour the logs (linux logs can be insanely detailed!). There is a good program I’ve used for years called logwatch that can easily help you with this by giving you filtered logs in one easy to “human brain parse” output. To install, simply aptitude install logwatch. To tweak its settings a bit, edit /usr/share/logwatch/default.conf/logwatch.conf. The main thing to decide is how you want to check the output. If you’ve configured your server to send outbound email, which is very handy, then I recommend having the output of logwatch emailed to you. Otherwise you can set it to output to a file and just check that every day, which is still easier than checking multiple log files and scouring through their content. I find the html output harder to read than the text output, so I just have it set to email me once a day. I also like the detail level set to Med; it provides much more valuable information than the default while not giving so much that you end up not reading it.
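As a sketch, the logwatch.conf settings that match what I described would look something like this (the MailTo address is a placeholder for your own):

Output = mail
Format = text
MailTo = you@example.com
Detail = Med

You can also run a one-off report to the terminal to preview the result: logwatch --output stdout --format text --detail Med --range yesterday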

Finally, before we end Part I, you need to make sure some sort of firewall is running. We’ll use iptables, but one main complaint about it is that it can be difficult to set up and maintain if you aren’t accustomed to the way the rules work. I find people start with good intentions and configure it, then somewhere down the line they can’t get something working just right, they turn off iptables, and the program works. Instead of figuring it out, they leave iptables off. In Part II we’ll go over an easy way to get iptables under your control, but in the meantime you should apply some basic rules. You can copy and paste the following as a script on your system and run it. You can easily add a few other ports you need to the script and re-run it; it will always flush all rules and then re-apply based on what’s in the script. Currently it allows our customized ssh port of 2222, and the two web ports of 80 and 443, which you’ll want open if you are running a web site.
So now create a script, e.g. vi iptables_config, and paste in the following:

#!/bin/bash
#
# iptables example configuration script
#
# Flush all current rules from iptables
#
iptables -F
#
# Set default policies for INPUT, FORWARD and OUTPUT chains
#
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
#
# Set up a new chain and route INPUT and FORWARD traffic through it
#
iptables -N MY-Firewall-1-INPUT
iptables -A INPUT -j MY-Firewall-1-INPUT
iptables -A FORWARD -j MY-Firewall-1-INPUT
#
# Set access for localhost
#
iptables -A MY-Firewall-1-INPUT -i lo -j ACCEPT
#
# Accept packets belonging to established and related connections
#
iptables -A MY-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#
# Allow SSH connections on tcp port 2222
# This is essential when working on remote servers via SSH to prevent locking yourself out of the system
#
iptables -A MY-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2222 -j ACCEPT
#
# Allow web
iptables -A MY-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A MY-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
#
# Reject everything else
iptables -A MY-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
#
#
# Display the resulting ruleset (iptables-save prints to stdout;
# redirect it to a file if you want to reload these rules later)
#
/sbin/iptables-save
#
# List rules
#
iptables -L -v

To install iptables if you don’t have it: aptitude install iptables iptables-persistent
Then run the script to set the rules, as shown below.
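Assuming you saved the script as iptables_config per the above:

chmod +x iptables_config
sudo ./iptables_config

One caveat: /sbin/iptables-save by itself only prints the ruleset. If you want the rules restored automatically at boot via iptables-persistent, redirect that output into its rules file (the exact path varies by release, so check the package’s documentation).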

That’s it! This is a tremendous start. As a quick security review, here is what you now have:

  • non-default ssh port to stop script kiddies from hammering user login/password guesses
  • thanks to the sshlogin group, only one non-default user is even allowed to ssh in, even if someone targets you and finds your open ssh port
  • daily security updates being applied to minimize risk of an exploitable package causing any harm
  • daily log reports being emailed to you for review
  • nice, concise iptables script to ensure only the ports you intend are open to the outside world

Look out for Quick Secure Setup Part II, where we will protect the server even more by blocking attacks and making the firewall easier to manage.