Apr 25, 2016
 
PiVPN Logo

So I really applaud the efforts and progress by the EFF for the Let's Encrypt initiative.  In this post-Snowden era I believe it is very important for users to take their privacy and security into their own hands whenever possible.  Let's Encrypt allows anyone who is running a website to easily serve that site over an encrypted channel.  Even if you aren't a technical person you should be able to get a free cert from Let's Encrypt using any of the integrations they provide.  This is a great boon for those who have their own sites and blogs, but what about the people at home who don't run their own site?  They use the internet and rely on the various sites to determine whether they are secure.  A frequent piece of the solution for this is to leverage a VPN (Virtual Private Network).  It will encrypt and tunnel your traffic from your client side through to the VPN server side.  Correct, from the VPN server out to the internet you will again be unencrypted if the site doesn't offer HTTPS connections, but from your local location to the VPN server you have great security.  This is important because people frequently use the internet in locations whose security they can't and don't control.  Any wifi hotspot, public place, friend's house, etc., you have no clue what could be on that network intercepting your data or worse.  If you set up a VPN server at home, where you trust your local network, then no matter where you are, you can VPN into your home network and it is as if you were using the internet from your house.  In other words, if you are sitting in a Starbucks, you can VPN into your home VPN server and now you have completely encrypted traffic from the unknown and unsecured Starbucks wifi, direct to your home, where it then goes on to the site you visited.  Sadly, for most, configuring and managing your own VPN server is a task not easily accomplished.

This loops us back to a Let's Encrypt parallel.  Where Let's Encrypt took a task that was challenging for many and made it greatly more accessible, PiVPN does the same for installing and managing an OpenVPN server.  What is this PiVPN?  If you've searched for how to install OpenVPN then you may have found it is non-trivial.  PiVPN makes installing OpenVPN easy, quick and fun.  If you are technical enough to get a Jessie Lite image up and running on a Raspberry Pi, you are now technical enough to run your own VPN server thanks to PiVPN.  Once you have successfully logged into your Raspberry Pi, the install process for a fully working and manageable OpenVPN server is a one line command:

curl -L https://install.pivpn.io | bash

Yes, that is it.  You can literally hit 'Enter' through the install, but if you are more technical the install will let you choose many different customization options along the way.  Once it is installed you can manage the configurations (OVPN files) you install on your clients with simple commands on your server:

‘pivpn add’ – This will add a client and takes one optional parameter:

‘pivpn add nopass’ – This will add a client certificate without a password.  Only recommended if you really need it.

‘pivpn list’ – This will list the clients

‘pivpn revoke’ – This will remove clients
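For example, adding and grabbing a profile for a phone might look something like this (the client name, the 'pi' user, and the ovpns path reflect a default install; the hostname is just a placeholder):

pivpn add                                        # prompts for a client name, e.g. "phone", and a pass-phrase
pivpn list                                       # confirm the new client shows up
scp pi@raspberrypi.local:~/ovpns/phone.ovpn .    # copy the profile to the device that will connect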

All the code for this installer is available on Github where questions and contributions are welcome!

As a final note, before you run off and play with PiVPN on your own, I understand that some people may want to encrypt traffic leaving their home.  It is one thing to be in a public, untrusted place and encrypt the traffic to your home, where it then goes out normally to the internet.  But what if you don't trust your own ISP?  Now you want to encrypt the traffic even leaving your home, perhaps to a VPN endpoint out on the internet.  Time to pay for a service?  NO.  I've made sure PiVPN will also work if you boot up a free-tier Amazon server running the latest Ubuntu 14.04 server image.  So simply go create an account on Amazon's AWS infrastructure, boot up a free-tier Ubuntu server and run the PiVPN install command.  Now you have your own VPN server out on the internet just like a paid service.

Feb 01, 2016
 

Those who care about their communications (individuals or enterprise) may at one point decide to look into encrypting email. Email is unfortunately a product of its past, designed to carry communications from one mailbox to another across the internet during a time when encrypting that communication wasn't even an afterthought. There have been some bolt-on patches to secure email, but really a nice new protocol is needed. Being stuck with what we have, you may have decided for one reason or another that S/MIME certificates are what you'd like to use to secure your email. A lot of people are clearly concerned with the privacy of their email; see the section "How important is it that your online information remain private?" in this article. I recently needed to ensure such certificates were also FIPS compliant. I had a hard time using the normal openssl binaries and ensuring I was using FIPS compliant commands to generate the certificates. So first we will compile an openssl binary in FIPS mode. This binary will error as soon as we run a command that is not FIPS compliant, ensuring our resultant certs are good. Then I'll show how to generate the certs, either self-signed by your own CA that you will use and trust among friends/family, or signed by an Enterprise CA if you are a company with a trusted Enterprise CA and clients that trust it. Regardless, at the end you'll have S/MIME certs you can use in your mail clients for secure communications.

Building OpenSSL with FIPS Mode

I'm using Ubuntu 14.04.3 LTS; instructions may vary slightly on a different target system.
First, download and extract the OpenSSL source tarballs we will need (the versions below are the latest at the time of this writing, but always grab the latest stable releases):

wget http://www.openssl.org/source/openssl-1.0.2e.tar.gz
wget http://www.openssl.org/source/openssl-fips-2.0.11.tar.gz
tar xvzf openssl-fips-2.0.11.tar.gz
tar xvzf openssl-1.0.2e.tar.gz

You'll probably need the build-essential package, which I already had installed, so go ahead and `aptitude install build-essential`.
Next let's build the FIPS module our openssl will need:

cd openssl-fips-2.0.11/
./config
make
make install

Near the bottom of that output you should see something like "installing to /usr/local/ssl/fips-2.0"; we will need to reference that directory in a bit.
Now let's compile our own openssl. cd into the openssl-1.0.2e/ directory you extracted above:

./config fips shared
make depend
make
make install

What we did here was tell our compiled openssl that we have a shared FIPS module to use. The output of the above should tell you "OpenSSL shared libraries have been installed in: /usr/local/ssl".

So your normal system openssl is completely intact, but in /usr/local/ssl you now have one compiled with FIPS support.
You can check the versions:
openssl version run from anywhere will use your system binary and output something like `OpenSSL 1.0.1f 6 Jan 2014`,
whereas if you cd /usr/local/ssl/bin and run ./openssl version you'll see our FIPS one: `OpenSSL 1.0.2e-fips 3 Dec 2015`.
Great now lets export a couple variables so that our compiled openssl can get to the shared fips module:
export LD_LIBRARY_PATH=/usr/local/ssl/fips-2.0 && export OPENSSL_FIPS=1
As one final test to prove this openssl will error on anything that is not FIPS compliant, we can try to get an MD5 hash of a file:
./openssl md5 /home/user/somefile
and you’ll get some error output like:
Error setting digest md5
140006545020576:error:060A80A3:digital envelope routines:FIPS_DIGESTINIT:disabled for fips:fips_md.c:180:

since the MD5 digest is not FIPS approved.
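A FIPS-approved digest, by contrast, works fine from the same binary, which doubles as a quick sanity check that the build itself is healthy:

./openssl sha256 /home/user/somefile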

Creating the FIPS S/MIME Certs

Now that we have an openssl that will only allow us to run things that are FIPS compliant we can generate some S/MIME certs.
I'm going to number the steps to take here to create your certs, with a comment after each numbered step describing what that step is doing. Where you see multiples of the same number, choose the step you want based on your desired outcome (what options you want, what CA will be used, etc.).
1. ./openssl genrsa -out newkey.key 4096 – where newkey.key is the key and can be named anything you want; we are just generating a 4096-bit key.
2. ./openssl pkcs8 -v1 PBE-SHA1-3DES -topk8 -in newkey.key -out enc_newkey.key – this takes our normal key above and encodes it in PKCS#8 format. This is a commonly used format but you have options; the next command uses v2, which isn't as widely accepted, and there is also PKCS#12. If you think you need some variant of a command I specify here, you can get more information by running ./openssl pkcs12 /? and it'll output your options.
2. ./openssl pkcs8 -v2 des3 -topk8 -in newkey.key -out enc_newkey.key – Here is another variant of the above command if you want to use version 2 of PKCS#8; again, choose only one of these commands to run.
*With any of the step 2 commands, you will be asked for a password, please enter something from 4 to 1023 characters long and then provide it when asked in step 3 below.
3. ./openssl req -new -key enc_newkey.key -out new_request.csr – Now we take our new pkcs8 encoded key and generate a CSR (Certificate Signing Request) with it.
(see CSR Creation Info below for examples of fields)

Depending on how you want to have your certificate signed, use ONE of the Step 4’s below:
4. ./openssl x509 -req -days 3650 -in new_request.csr -signkey enc_newkey.key -out email.crt – This is the self-signed option
4. ./openssl x509 -req -days 3650 -in new_request.csr -CA enterprise_ca.cer -CAkey enterpriseprivatekeynopass.pem -set_serial 13 -out email.crt – This shows signing with a CA you have the cert and key for.

Using a Microsoft Enterprise CA Web UI, this is your Step 4:
4.
  • Click on the Request a Certificate link
  • Click on the advanced certificate request link
  • Paste the contents of your CSR into the top box (do NOT include the beginning and ending lines!)
  • Click Submit
  • Download the Base 64 encoded certificate and chain, naming them yournameB64.cer and yournameB64.p7b

If you used the Web UI then this is your Step 5:
5. ./openssl pkcs12 -export -out yourname.pfx -inkey enc_newkey.key -in yournameB64.cer -certfile enterprise_ca.cer
otherwise
5. ./openssl pkcs12 -export -descert -in email.crt -inkey enc_newkey.key -out email.pfx
This is exporting your new cert and key into a pfx file that is generally used to import into mail clients to support S/MIME.
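As an optional sanity check, the same binary can parse the resulting .pfx and print a summary of what's inside (it will prompt for the export password you chose):

./openssl pkcs12 -info -in email.pfx -noout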

CSR Creation Info
Example answers:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:South Dakota
Locality Name (eg, city) []:Sioux Falls
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Company Inc
Organizational Unit Name (eg, section) []:Development
Common Name (e.g. server FQDN or YOUR name) []:John Doe
Email Address []:john.doe@company.com

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Jan 11, 2016
 

Let's Encrypt is a non-profit organization that is lowering the bar for getting people to encrypt their websites. They are lowering the bar in two ways: first by making it easy and API-driven to obtain and renew the certificates, and second by making it entirely free. Note, these are Domain Validation certs, not Extended Validation or Organization Validation certs; those you should still buy from a reputable company like DigiCert.
Understanding the importance of encrypting the web is Step 1 here. Go google search to convince yourself of that and then come back to get some free certs on your websites.
The method of using Let's Encrypt described below will probably get much easier for non-technical people as their software matures. However, I'm completely happy with it as it will grab the certs only, not mess with my nginx/apache configs, work with multiple vhosts, and renew every 60 days. The first thing someone may balk at when looking into the Let's Encrypt certs is that they give out certs that expire in 90 days and recommend renewing every 60. This isn't what most people are used to, but once you have it configured you will do nothing to make this happen, and they briefly review the decision to use ninety-day lifetimes on their certificates as well.

Installing Let’s Encrypt & Obtaining a Certificate

First you will want to git clone their repository so ensure you have git installed and clone it:

apt-get install git
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt/

Later to ensure you are using the latest version you will just run a git pull within the letsencrypt directory to pull down their latest changes.
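In other words, the whole update step is just:

cd letsencrypt/
git pull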

At this point you need to simply run ./letsencrypt-auto with the parameters you want to get your cert.  It’s very helpful to know that ./letsencrypt-auto --help all will give you all the options and parameters you can use.
In my case I have a site like vigilcode.com that has a separate vhost for blog.vigilcode.com and forum.vigilcode.com and is also available at www.vigilcode.com. You can specify all of these in one command.

./letsencrypt-auto certonly --webroot -w /var/www/vigilcode.com --email youremail@yourdomain.com -d vigilcode.com -d www.vigilcode.com -d blog.vigilcode.com -d forum.vigilcode.com

This will put the certs in directories under /etc/letsencrypt/live/vigilcode.com since vigilcode.com was the first domain parameter. If you have a completely different domain as a vhost as well then simply run another command for that site like:

./letsencrypt-auto certonly --webroot -w /var/www/anothersite.com --email youremail@yourdomain.com -d www.anothersite.com -d anothersite.com

and letsencrypt will put those certs into /etc/letsencrypt/live/www.anothersite.com/

An example of a simple nginx config for “anothersite.com” that would redirect any normal http traffic to https and point to these letsencrypt certs is

server {
    listen 80;
    listen [::]:80;
    server_name www.anothersite.com anothersite.com;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl;
    listen [::]:443 ssl;
    #
    gzip off;
    #
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_timeout 5m;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:!aNULL:!eNULL:!MD5:!EXP:!PSK:!SRP:!DSS;
    ssl_certificate       /etc/letsencrypt/live/www.anothersite.com/fullchain.pem;
    ssl_certificate_key   /etc/letsencrypt/live/www.anothersite.com/privkey.pem;

    root /var/www/anothersite.com;

    index index.html;

    server_name www.anothersite.com anothersite.com;

    if ($request_method !~ ^(GET|HEAD|POST)$ )
    {
            return 405;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

This config should give you an easy A in the Qualys SSL Labs Server Test which I strongly recommend you run against your site to ensure you always have an A. I assume if I turn on HSTS I’d get an A+ out of it but didn’t want to enable that yet.
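For reference, if you do decide to enable HSTS later, it's a single extra directive inside the 443 server block; the one-year max-age shown here is only an example, so start with a smaller value while testing:

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";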

With the above I did encounter Insecure Platform Warnings in the output. To solve this I switched to PyOpenSSL and modified the client to use it as well:

apt-get install python-pip
pip install pyopenssl ndg-httpsclient pyasn1

and then added 2 lines to client.py per this diff:

git diff acme/acme/client.py
diff --git a/acme/acme/client.py b/acme/acme/client.py
index 08d4767..9481a50 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -11,6 +11,7 @@ import OpenSSL
 import requests
 import sys
 import werkzeug
+import urllib3.contrib.pyopenssl

 from acme import errors
 from acme import jose
@@ -19,6 +20,7 @@ from acme import messages


 logger = logging.getLogger(__name__)
+urllib3.contrib.pyopenssl.inject_into_urllib3()

Configuring Automatic Renewals

I found a nice script in the letsencrypt forums that accomplished most of what I wanted. I made a few edits so also uploaded my modified version of it here in case the forum copy disappeared.

If you look at the script the basic renewal command is ${LEBIN} certonly --renew-by-default --config "${LECFG}" --domains "${DOMAINS}"
The only piece missing from our original command is that we were using the webroot authentication method and specifying the webroot path. Since I have different webroots for different virtual hosts, I want to put them in their own .ini files. So instead of specifying a static .ini file in his script, I changed it to be a variable matching the cert name:

Line 125: LECFG="/etc/letsencrypt/${CERT_NAME}.ini"

Now I'd create /etc/letsencrypt/www.anotherdomain.com.ini and /etc/letsencrypt/vigilcode.com.ini,
give each an email = line, and then also specify the webroot authenticator and webroot path, as in:

email = youremail@yourdomain.com
authenticator = webroot
webroot-path = /var/www/vigilcode.com

Then you can just pop this into a crontab with crontab -e

10 2 * * 6 /root/scripts/auto_le_renew.sh vigilcode.com
15 4 * * 6 /root/scripts/auto_le_renew.sh www.anotherdomain.com

These will run every Saturday morning so when I wake up I’d have the success or fail message once the renewal is triggered.

Jan 08, 2016
 

This post will serve to help step you through the process of rooting your Google Nexus device with the systemless root method. As techniques change due to software updates, I intend to keep this post updated with the latest information. First, a bit of background to contrast the traditional root with what is now known as systemless root.
As Google releases newer versions of Android, they are also attempting to increase the security of their handsets. They have enabled SELinux by default, added SafetyNet checks, and added some boot-up warning messages. The traditional root methods would install a superuser binary, "su", onto the system partition of your phone, allowing apps to ask for root rights and escalate their privileges. Within the Android community, we want the benefits that root access provides but we don't necessarily want it at the cost of discarding all the security protections Google is trying to include. In other words, we can attempt to be better citizens of the Android rooting community by trying to have our root methods work within these security confines as much as possible. The first bit that happened along these lines was that instead of rooting the new Marshmallow OS and disabling SELinux with custom kernels, the new root method patched the kernels and updated SELinux policies to include what is needed for root applications to run and request the escalations they typically need.  The second bit was the ability to root without touching the system partition at all, keeping it completely stock, which has the unintended side effect of also allowing Android Pay to work on a systemless rooted device.

What is root and why care about systemless?

I'd like to define what root really means.  It's quite simple really: root is obtaining elevated privileges on the device.  It does not guarantee anything else but those escalated privileges.  With other security methods being employed on devices, this does NOT mean you will be able to do everything with those escalated privileges.  Depending on your device, SELinux policies may still hamper what you want to do, and the system partition may still not be writable or may have no free space in the stock image.  Root doesn't mean you can modify anything; it just means you have the elevated root privileges on that device.

It's important to have this understanding as we discuss systemless root.  One of the benefits of a systemless root is being able to have those escalated privileges while not tripping many of the other security flags set in place.  You are able to keep a stock system image for the first time, in fact never even mounting it for rw access at all.  This currently allows the SafetyNet checks to pass, and as such Android Pay still works on devices rooted in this manner.  As the development of this method matures, receiving OTA updates may be easier as you'd just need to revert the modified boot.img (kernel) back to stock to apply the OTA update, then patch the new one to get root back.

Obtaining a pure systemless root

Getting systemless root and even manually installing any of the monthly security updates is pretty easy once you've done it.  It takes me about 5 minutes to install a monthly update and re-root on my Nexus devices.  It will only get easier if SuperSU adds reverting the modified boot.img.  First let's get some pre-requisites out of the way.

Pre-requisites:

  1. Install the Android SDK.  From the link, grab the SDK Tools only for your platform.  Extract it to some folder in your OS, it doesn't really matter where; let's just say you extracted it to the SDK folder.  In Windows you want to go into the android-sdk-windows folder that was extracted and run "SDK Manager.exe".  For Mac/Linux go into your extracted folder and look for a tools/ folder.  In there you'll see an android executable file.  Run that.
  2. Once you have the Android SDK Manager open, click Deselect All on the bottom and just select to install the "Android SDK Platform-tools", which is in the Tools section, and the "Android Support Library" down in the Extras section.  If you are on Windows also check the Google USB Driver in that section. Click to install.  Once completed, the only real tools we now care about live in a platform-tools folder that is in your extracted SDK directory.  Every time you go to update your device you should run the Android SDK Manager again and see if any of these 3 check boxes have updates available.  If they do, update them as it'll help avoid potential future issues.
  3. Download the latest TWRP recovery for your device.  Using a web browser navigate to http://twrp.me, click on the Devices link in the upper right and search for your device, ie “Nexus 6p” or “Nexus 5X”, etc.  Once found, click on it, go down to the Download Links: section and click on the “Primary (Recommended)”.  Download the latest version of twrp-x.x.x.x.img file you see there.
  4. Download the Google Factory Image for your device.  Just find your device in that list and download the latest version which will be the last link in the column for your device.  You’ll want to extract this .tgz file but not the zip inside it.
  5. Next we need SuperSU itself.  Currently the version of SuperSU we want is available from Post #3 in this thread, since it is a Work in Progress (WIP).  You'll want the latest version you see there, which currently is "BETA-SuperSU-v2.66-20160103015024.zip".  Don't extract this zip, just download it.

Now we are prepped; outside of checking for newer versions of these pre-reqs once in a while, you don't have to do these steps for every update.  Now let's root our Nexus device without modifying (or even mounting) the /system partition.

Apply Systemless Root:

  1. First you’ll want to reboot your device into the bootloader.  So that I don’t have to baby step you here feel free to use google like ‘how to reboot <device name> into bootloader’ but basically power it off then hold vol down + power until you see the logo then release power.  Done successfully, you should see a screen with the android logo on its back and a bunch of smaller sized text on the bottom.
  2. Once in the bootloader, connect your device to your computer over USB and, using a command line, go into your platform-tools folder from above.  Run the command fastboot devices
    Side Note: If you are in windows you’d probably have to type fastboot as fastboot.exe and if you are in mac/linux you may have to use ./fastboot to reference the one in the folder you are in. I’ve added the platform-tools directory to my path so if you do that in any OS, you can just type it as “fastboot” and not even be in the platform-tools folder. Regardless, once you see your device show up in the output of fastboot devices you can then proceed to unlock your bootloader. For the Nexus 6p and Nexus 5x you would use fastboot flashing unlock and for older Nexus phones you can use fastboot oem unlock. Look at your device and select Yes with the volume and power keys and now your device should show that it is bootloader unlocked.
  3. Now we can flash our previously downloaded factory image to either update to this version or do a full factory return to stock.  If you look in the extracted factory image you’ll see a .zip, a radio.img, bootloader.img and scripts called “flash-all”.  If you want to wipe your data and return to stock while flashing all these to their latest versions you can just run the flash-all script for your OS.  If you look on your device in the bootloader screen it will display the version of your bootloader and radio (could be labeled Baseband). You can then see the version of the two .img files and determine if they have been updated.  Since google is releasing monthly security updates, many times these are not updated.  If they are newer than what is on your device you would flash them with:
    fastboot flash bootloader bootloader-blah.img
    fastboot reboot-bootloader
    fastboot flash radio radio-blah.img
    fastboot reboot-bootloader
    

    To flash the rest of the device partitions you can type
    fastboot update image-blah.zip
    This will NOT wipe your data, you can run this to update every partition with that factory image. If you want to factory reset your device with this version then you’d add a “-w” to that command to tell it to wipe as in: fastboot -w update image-blah.zip
    After flashing the factory image, even if you intend to systemless root, let your device boot the first time and go through updating the apps. Once booted then power back off and go to the bootloader again.

    [UPDATE 05/2016] – Google has released the OTA images for Nexus devices available here.  So optionally, instead of running the fastboot update image-blah.zip command from above that is a few hundred megs and reflashes everything.  You can instead just apply your OTA via adb sideload as detailed on their site.  This should be a tad quicker and then you can continue on below just as before.

  4. Now we have the latest factory image on our device and we need to systemless root it. In order to flash the SuperSu zip we need to use a special recovery. That is the TWRP software we downloaded. You can either flash this recovery with fastboot flash recovery twrp.img or you can just boot into it temporarily in order to get root applied. I prefer the latter since it is one less thing to revert if I want to take an OTA update in the future. To just boot into twrp recovery we issue: fastboot boot twrp.img.  When twrp boots it will ask if you’d like to keep system read-only. You’ll want to say yes to that since we don’t want to even mount system for writing at all.
    Next we use the adb tool that is also in the platform-tools folder. We want to push our SuperSu zip file we downloaded from pre-requisite step 5. If you have that zip in your platform tools folder the command would be adb push BETA-SuperSU-vX.x.x.zip /sdcard/Download/. Don’t forget about that period at the end. Now before we install SuperSu we want to ensure it doesn’t bind mount a /system/xbin as that will trip SafetyNet checks. So we use adb again but this time the command is adb shell "echo BINDSYSTEMXBIN=false>>/data/.supersu"
    Now on your device you can click the Install button, navigate to the /sdcard/Download folder and flash the SuperSu zip file you see there.
  5. That's it! Reboot your device, and if TWRP asks to install SuperSU be sure to say NO.  This looks like a lot, but once you go through it once and have the pre-reqs all set, you can install the monthly updates in a matter of 5 minutes. The benefit is they post the updates to the Google factory images before developers will even see the code in AOSP and before they ever push OTA updates. So by using this method you can get onto these updates right away without waiting for anything.

Apps with workarounds for Systemless root

Since we installed systemless root here and the idea is to keep the system partition completely pristine and stock, I'd like to point out some of the more valuable root applications you can run while in this mode, plus a couple that typically need to modify system but can still be made to function perfectly fine.  First, I have root apps like Titanium Backup, Greenify, CF Lumen, and Nova Launcher that all have root access and don't touch my system.  One of the most important apps for me to use, however, is AdAway, which needs to modify the /system/etc/hosts file in order to function properly.  Many people also use busybox, which installs to system, so for both of these apps we need a couple of workarounds to keep them from touching our /system partition.

AdAway

For AdAway it has been made fairly simple.  You'll want to go to this XDA forum thread and download both the latest AdAway application and the zip mentioned for a systemless hosts file.  Boot your device into that TWRP recovery again.  Use adb to push these two files to /sdcard/Download/ just as we did for the SuperSU zip.  In the recovery, flash the AdAway_systemless_hosts.zip and reboot your phone. Back in your OS, install AdAway from your /sdcard/Download folder using any file explorer like FX File Explorer, or even from F-Droid.  Now run AdAway, leaving it at its default target hosts location of /system/etc/hosts.

How can AdAway write to /system/etc/hosts yet we still aren't modifying our system partition?  Well, that systemless_hosts zip you flashed set up a special mount on the filesystem of your phone such that /su/etc/hosts (our systemless root lives in /su) is bind mounted to /system/etc/hosts.  So even though AdAway thinks it's writing to /system/etc/hosts, it is really going to /su/etc/hosts, which is mounted in our /data partition.  This might be hard for some to follow, but just know that if you flash that zip you can use AdAway as normal and it will not be touching your system.
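If you want to see this for yourself, a quick check over adb should show the bind mount (the exact output varies by device and SuperSU version):

adb shell "mount | grep hosts"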

Busybox

Getting busybox to not modify system is a bit more tricky.  Will update this post as I gather that information in detail.

Nov 30, 2011
 
Mastering SSH Keys

Welcome to the wonderful world of SSH keys! If you don't yet share my enthusiasm you soon will! SSH keys are a perfect way for you to control access to your machine, whether that be a very secure way for only you to have access, locking down other authorized users and preventing their passwords from getting distributed or stolen, or even allowing access to scripts for very specific purposes. SSH keys accomplish all of it and more. First, let me just lay out how simple SSH keys are in case you aren't clear and maybe get discouraged by the length of this post. The basics of it are: you create a private and public key pair, optionally protected by a pass-phrase. You copy the public key to servers you wish to log on to, while keeping the private key secured on your system. Then when you log in, the server uses the public key to verify a proof that only your private key (unlocked with that optional pass-phrase) can produce, and you get in. It is that simple at its core; there are just a lot of various configuration options you can use to accomplish certain features or functionality that may be more desirable for you. The keys are very secure, as using this method alone helps protect against man-in-the-middle attacks and eliminates a few other possible break-ins from password-only authentication.
The most common way I see SSH keys being used is as a password-less easy login for an admin to use for his remote linux systems. Although this is better than using a password only system you still are vulnerable to someone obtaining your private key and using it for instant access to any system with the corresponding public key. I prefer to use them as a type of two factor authentication for the ssh logins where I need to provide the password to the key plus (of course) have the actual key. Regardless, let’s get a basic configuration going and start using ssh-keys. First on the server side you can follow a config I’ve used in a previous post but general settings to ensure:

  • Port 2992 – (I always recommend changing the default port)
  • Make sure "RSAAuthentication" and "PubkeyAuthentication" are both set to yes
  • PermitRootLogin no
  • AuthorizedKeysFile %h/.ssh/authorized_keys
  • I’d also only allow users in a certain group to use ssh for further protection so “AllowGroups sshlogin” for example and add users who will ssh into this system into the sshlogin group.

Now on your machine, the computer you will be ssh’ing from, you need to create the key pair and copy the public key to the host(s). Generate the key pair as follows:
Create the .ssh directory in your linux home directory if it doesn’t exist already. Ensure correct permissions with chmod 0700 .ssh. Then create the actual keys and when prompted specify a pass-phrase to use (This pass-phrase is the optional part if you just want to authenticate with the keys alone but as I mentioned above I prefer supplying one so you have a two factor authentication going on). Keep in mind this can be a passphrase not just a password.
ssh-keygen -b 2048 -t rsa -f ~/.ssh/MainKey
The -b specifies the length of the key and the -t the type (which is rsa or dsa).
I’d use RSA 2048 or if you are the more paranoid type RSA 4096. I won’t delve into the RSA/DSA debate except to say that I’ve done a LOT of reading on the topic and I’m choosing RSA keys and with sensitive servers I’d go with RSA at 4096.
[Update, Dec 2015] :: DSA is being deprecated in OpenSSH so I now strongly recommend RSA over it. If you are running version 6.5 or newer of openssh I’d actually recommend using ed25519 keys over RSA and you can read this blog post for more details. Typically you can check your version with ssh -V command. Your command to generate this key would simply be
ssh-keygen -t ed25519 -f ~/.ssh/MainKey
Note for Mac OSX users, the version of ssh is too old for ed25519 unless you are on OSX 10.11, El Capitan. If you are on an older Mac OSX version simply google for homebrew install of openssh to upgrade your version and use that.
[End Update]
Once the keys are created, scp the public key to the server and append it to a file named "authorized_keys" in the .ssh directory (or wherever you specified the AuthorizedKeysFile in sshd_config). You can append multiple public keys to the authorized_keys file; just run cat mykey.pub >> authorized_keys for each public key.
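As a concrete sketch using the MainKey pair and the port 2992 from the server config above (adjust the user, host, and file names to your setup):

scp -P 2992 ~/.ssh/MainKey.pub user@myserver.com:      # copy the public key over
ssh -p 2992 user@myserver.com                          # then, on the server:
mkdir -p ~/.ssh && chmod 0700 ~/.ssh
cat ~/MainKey.pub >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys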

Now you may have noticed above when generating the key pair we passed the -f option. This way we can name our private key something other than the default. If you do not specify the -f the default will be id_rsa with the corresponding id_rsa.pub. So now to ssh into our system we will want to specify the identity file to use as:
ssh -i /home/user/.ssh/MainKey user@myserver.com -p2992
The reason I want to show the more complicated use case of specifying a particular identity file is that you may want to have different key pairs for servers that are yours versus your employers’ or any other separation that would be important to you. You can simplify the management of which keys are used for which hosts by specifying in the ~/.ssh/config file.
vi ~/.ssh/config

Add both host names and their identity file as follows:
Host server1.myserver.com
IdentityFile ~/.ssh/MainKey
Host server1.workserver.com
IdentityFile ~/.ssh/id_rsa

That's all there is to your basic key based login! Now let's briefly go over many of the popular use cases for these ssh-keys so you can see how powerful and helpful they can be for you.

First, as I mentioned earlier, I frequently see administrators using these without a password so they can quickly login to the servers they manage. The problem is you are now solely relying on the security of that private key for complete access to those systems. I will strongly recommend (yet again) putting a pass-phrase on your keys and then if you want a quick password-less login then simply configure ssh-agent to cache your pass-phrase on your system for that session. For details just search for “ssh-agent cache password” and you’ll find a few examples. In this way if your key is somehow stolen that person still has no access to the servers as they don’t know the password, yet you have the efficiency of not typing a pass-phrase once you set up and configure ssh-agent to cache it on your system. Win-win here.
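A minimal ssh-agent session looks something like this (the key name, user, host, and port are from the examples above):

eval "$(ssh-agent -s)"                              # start the agent for this shell session
ssh-add ~/.ssh/MainKey                              # asks for your pass-phrase once and caches it
ssh -i ~/.ssh/MainKey user@myserver.com -p2992      # later logins won't prompt for the pass-phrase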

Next, we can limit by IP or IP subnet where users with ssh-keys are able to login FROM. So say you are the server admin and you have other users you manage on the server. Maybe you (especially after reading this) have a requirement that they only login with ssh-keys. Being security conscious you might have a few concerns, not the least are:

  • Did they create a passphrase or leave blank?
  • Will they give their key and pass to someone else to use?
  • What if their key is stolen or compromised?

One thing you can do, if you control the server with the public key, is limit by IP or subnet where they can be coming from. In this way if the user's key is stolen it can't be used anyway if that person isn't trying to login from the same IP (or subnet) as where the original user is from. You can either wait till they login the first time and see what IP they are coming from, or ask them to send you the subnet or possible IPs they might come from. Once you have this IP or list of IPs, just open the authorized_keys file on your server and add a "from=" option to the beginning of their public key as shown below:

[root@mainserver .ssh]# cat authorized_keys
from="10.20.30.*,172.16.31.*,192.168.1.*" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArAkcHTOXZiDxcJEHNmrJRoM2HJE9Rq1uoiVHuTSjgl0THp0UFDNepnmCvk5bX22KzjAUa8PWnq1ZDw5Uf9/i6N/SiXittnxT0dFnZGj6RRep5Ae3AmOJbg0i69XL9o2zBJnYo2JVPXJkCDhSvVWokZUn5QjaJiGNigP9plA1He94Slkhn2jTxx1iehx9Vy/ojnxsJDqpTa8hX1GQK/b1jzvWdN3Qg+EhlxBDfKSA8u4uGsPP+6hXGgFLRluG/6yizj8LDF1LRWIKYaBvaLNJ+720sAI9O4miHyxY4n3ghBDbULPoLGz5a6bYRJ9pY9i0ySQEmXNSD+0u31+fAaGGpQ== user@sauronseye

So if you have this public key and it is supposed to be used by a user at Company A and that user gives it to someone else at Company B or it is compromised by evil haxor, then that key whether it had a password or not will not allow them access. Then with your normal audits you do (right?) you see failed logins from a foreign IP and realize the key was compromised. Server is still secure your world is safe from harm :).

The last and yet another very common way of using ssh-keys is for specialized scripts use. You have a script on one server and at a certain point in the script you’d like it to kick off a command or script on another server. How do you do this? Well you can create an ssh-key specific for this purpose. Create your keys just as above but name its identity for the job at hand so you don’t get confused.
ssh-keygen -t rsa -f ~/.ssh/login_audit
For an example, let's say you have a primary internal server and you want it to go out and gather a report of all the last logins from your remote servers. You could put a script like this on the remote server:

# Script: last_logins.sh
last -a > /home/user/last_logins.barney
scp /home/user/last_logins.barney fred@mainserver:.
# End Script

Then in the authorized_keys file on the remote server edit the public key just as we added the “from=” line but instead use “command=”:

command="/home/user/scripts/last_logins.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding [rest of pub key...]

Now whenever you ssh in using that identity file the only thing it will do is run that command and exit. There are countless uses for this alone and being aware that you can use ssh-keys like this will help you realize WHEN you should use them like this.
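For example, a weekly cron entry on the primary server could trigger the report with nothing more than the following (the hostname is a placeholder and the port matches the sshd config from earlier):

ssh -i ~/.ssh/login_audit -p 2992 user@remoteserver.com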
There is a bunch more you can do with ssh and ssh-keys so use this as a starting point and take advantage of the tools at your disposal!

~~~~~BONUS TWO-FACTOR AUTHENTICATION SECTION~~~~~
We discussed above using ssh-keys as two-factor authentication to login to a server via ssh. Some of you may be aware that Google offers a two factor authentication to login to your Google services (gmail, apps, etc.). They accomplish this with an application you can install called Google Authenticator.
If you don't use Google Authenticator and 2-factor authentication on your Google account you should strongly consider it. Gmail accounts are prime targets for being compromised, and a lot of people use that account for a wide array of things that would make a compromise that much more damaging. Read some of the links below for starters:
http://www.multitasked.net/2011/jun/27/hacked-gmail-google-account/
http://threatpost.com/en_us/blogs/new-password-not-enough-secure-hacked-e-mail-account-100410

Once you have Google Authenticator installed, because now you fear for the safety of your Gmail… what else can you do with this thing? Well, we can use it for two-factor authentication to login to our machines, much in the same way we used the ssh-keys.
I’ve found a great post that details how to do this and don’t forget to read the comments there as they include a lot of valuable information as well:
http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html

Linked also in that post is how to use a YubiKey to login with two factor authentication. These are all great and really secure methods to login and authenticate to your systems.

Hopefully you can realize the importance of two factor authentication to secure the things that really matter to you. There are countless hacks and exploits and more found every day. Trust me it is much easier to take the extra 10 minutes to set this up than the hours or days of recovery from a security breach. One needs to be proactive about security because if you are reactive then you are too late.

Jul 25, 2011
 
Here Come 4k Advanced Format HDDs

Looks like the 4k sector drives are really starting to hit mainstream. This year marks the first time I've received a new laptop that came with a 4k Advanced Format hard drive. The laptop was from Dell and we get in new Dell laptops all the time. The funny thing is that until I tried installing Windows XP on it (it came with Windows 7), I would have never even known it was a 4k sector drive. It's one thing for an industry to move to a new technology or practice, but without digging around there is virtually no media coverage or even clear labeling about this change for HDDs. For the most part it should be a non-event for your typical user. If you buy a new computer with such a drive you shouldn't notice any difference. Though there are a few caveats I think people should keep in mind, hence a few words of caution:

  •  If you use hard drive cloning software it’ll have to support the 4k sectors
  • If you buy a new USB disk that uses 4k sectors, be sure the OS you connect it to supports reading this.
  • If you buy a new computer and think to install an older OS on it, or buy a new HDD for an old computer, thinking to keep the old OS, either don’t get a 4k sector drive or ensure you can get it working –  continue reading…

For the most part you should be able to get everything working with 4k sector drives.  A lot of the drives out now are 4k internally but emulate 512 bytes to the OS to keep with the old standard.  These drives should be labeled or described as 512e drives.  You might see a performance hit when using those so either make sure you get a native 512 sector drive or you explore your options for getting the 4k sector drive working.   Western Digital for example, makes an align utility you can use on their 4k drives that will correctly align the partitions if you use the disk on an OS that doesn’t support the new sector size (like XP).  There seems to also be a jumper option on the WD drives that will get the partition aligned for the non supported OS’s.  From my quick research it seems any Linux kernel from 2.6.31+, Windows Vista, Windows 7, and Mac OSX 10.4+ should all be good reading and writing to the 4k sector drives.  It’ll be your Windows 2000, XP, etc OS’s that you’d have to pay close attention to.
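On a Linux box with one of those newer kernels, a quick way to tell whether a drive is native 4k, 512e, or plain old 512 is to compare its logical and physical block sizes in sysfs (swap sda for your device):

cat /sys/block/sda/queue/logical_block_size     # 512 on both legacy and 512e drives
cat /sys/block/sda/queue/physical_block_size    # 4096 on an Advanced Format drive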

I believe the 512-byte sector also hit its size limit at 2TB disks (an MBR partition table with 32-bit sector counts tops out around 2TB with 512-byte sectors), so any drive larger than 2TB would have to be a 4k sector drive.  That means 4k sector drives should hit their capacity limit at 16TB, so I guess we'll be revisiting this topic once manufacturers want to go over 16TB disks.   If you want to learn about why the industry needed to move to 4k sector drives and more details on the low level changes it makes to the disks then one of the best articles I've found on it is here: 4K Advanced Format Hard Disks.  Although this change is slipping into the mix very quietly there are clearly situations where you'll want to know if the drive you are getting uses 4k sectors, so take a moment to look into it so you don't get hit with any unexpected surprises.

 

May 27, 2011
 
Xirrus, Dense Wireless Solved

It wasn’t too many years ago that you could calculate the number of wireless clients you’d have to support simply by counting heads. Oh I have 100 people to support here? Okay, roughly 100 wireless clients then, max. Well that nice, simple equation is about as true as 1 + 1 = 1 nowadays. Think of all the things you might possess that have a Wi-Fi chip in them. In my workplace alone we have:

  • just about every cellphone (and I know a lot of people who regularly carry 2 phones with them for work vs personal)
  • laptops
  • tablets (iPad, Xoom, etc.)
  • Desk phones (Cisco 9971)
  • more than I’d like to admit, weird, random things discovered while scanning

My most accurate calculation is figuring a 4:1 ratio of wireless devices per employee. Think of your small 20 employee company. One normal access point could easily reach an area that 20 people would sit within. Yet if there were 80 wireless clients on that access point it could quickly get saturated. With cube farm layouts you could easily have hundreds of wireless clients within range of one access point. Typically in an enterprise world if you were to run into that situation you would have to deploy more access points in that area, not for range, but simply for throughput and load. Seems like more of a bandage on the issue than a true solution. Especially realizing the complexity of having multiple access points close to each other fighting in the same airspace. Well if you find yourself either needing more range out of single access point or more importantly needing to support a dense population of wireless clients then you should definitely look at what Xirrus has to offer.

Xirrus puts anywhere from 4 to 16 radios into an "access point" and in doing so truly solves your density issue with wireless clients. My example above is a manageable situation. Think about conferences or trade shows, though, and you see where there can be a huge demand for an access point that can handle hundreds of wireless clients within its range. I don't want to go over all the technical specifications, but let me point out a few of the most notable differences in this array. Instead of omni-directional antennas they use directional antennas covering the 4 basic directions (north, south, east, west). These 4, regardless of your model, will be the only dual-band a/b/g/n radios. So their basic model, the XN4, has these 4 radios; the XN8 has these four and then adds 5GHz a/n radios in the NW, NE, SW, SE positions. There are XN12 and XN16 models, each adding to your 5GHz band. As you can imagine, placing directional antennas in a circular array will give you a bit more range than an omni-directional one. As you move to the XN8 and up they can more tightly focus the beam of the directional antenna, giving an even greater range.

In the enterprise access point space there is a great need for wireless monitoring. In other words, detecting other wireless signals within your air space so you can either move to a cleaner channel or identify rogue APs that should be shut down or removed. Cisco has what they call Cisco CleanAir Technology. It's actually a great technology and Cisco implements it quite well. However, one of the radios in a Cisco AP has to quickly switch to monitoring mode and then switch back to accepting clients in order to maintain this discovery of your wireless signals. You could also buy an access point dedicated to monitoring, but in a large deployment it would only monitor within the limits of its range. If you were to use a Xirrus XN8 you could designate one of the radios solely to monitoring this spectrum, thus not messing around with that switchover to take clients, still having 7 left for throughput, and you could do this for each access point, increasing your monitoring coverage. From my initial research you could replace standard access points with ones from Xirrus at a 4:1 ratio quite easily.

The other nice tidbit these add are dual gigabit lan ports for either primary/failover or load balancing into enterprise switches. Considering the number of users they support this is almost a requirement but nice enough even for failover if you need to power cycle a core switch on your network these could stay up serving clients. These Xirrus Wi-Fi arrays lack virtually no core feature I could find within the enterprise AP space handily matching Cisco bullet point for bullet point. There are some differences of course in implementation but if you need to cover a further range, and more importantly a DENSE population of wireless clients you’ll want to check out Xirrus for a solution.