Jan 11 2016

Let’s Encrypt is a non-profit organization that is lowering the bar for getting people to encrypt their websites. They are lowering the bar in two ways: first by making it easy and API-driven to obtain and renew the certificates, and second by making it entirely free. Note, these are Domain Validation certs, not Extended Validation or Organization Validation certs; those you should still buy from a reputable company like DigiCert.
Understanding the importance of encrypting the web is step 1 here. Go do a Google search to convince yourself of that and then come back to get some free certs on your websites.
The method of using Let’s Encrypt described below will probably get much easier for non-technical people as their software matures. However, I’m completely happy with it as it will grab the certs only, not mess with my nginx/apache configs, work with multiple vhosts, and renew every 60 days. The first thing someone may balk at when looking into Let’s Encrypt certs is that they issue certs that expire in 90 days and recommend renewing every 60. This isn’t what most people are used to, but once you have it configured you will do nothing to make this happen, and they also briefly explain their decision to use ninety-day certificate lifetimes.

Installing Let’s Encrypt & Obtaining a Certificate

First you will want to git clone their repository so ensure you have git installed and clone it:

apt-get install git
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt/

Later, to ensure you are using the latest version, you can just run a git pull within the letsencrypt directory to pull down their latest changes.

At this point you simply run ./letsencrypt-auto with the parameters you want to get your cert. It’s very helpful to know that ./letsencrypt-auto --help all will show you all the options and parameters you can use.
In my case I have a site like vigilcode.com that has separate vhosts for blog.vigilcode.com and forum.vigilcode.com and is also available at www.vigilcode.com. You can specify all of these in one command.

./letsencrypt-auto certonly --webroot -w /var/www/vigilcode.com --email youremail@yourdomain.com -d vigilcode.com -d www.vigilcode.com -d blog.vigilcode.com -d forum.vigilcode.com

This will put the certs in directories under /etc/letsencrypt/live/vigilcode.com since vigilcode.com was the first domain parameter. If you have a completely different domain as a vhost as well then simply run another command for that site like:

./letsencrypt-auto certonly --webroot -w /var/www/anothersite.com --email youremail@yourdomain.com -d www.anothersite.com -d anothersite.com

and letsencrypt will put those certs into /etc/letsencrypt/live/www.anothersite.com/
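As a quick sanity check (my own suggestion, not part of the letsencrypt client), you can confirm that the issued cert actually covers every domain you passed with -d by printing its subjectAltName entries with openssl:

```shell
# Hypothetical helper: print the DNS names a certificate covers.
# The path in the example call below is illustrative; point it at your
# own lineage under /etc/letsencrypt/live/.
list_cert_domains() {
    openssl x509 -in "$1" -noout -text | grep 'DNS:'
}

# e.g. list_cert_domains /etc/letsencrypt/live/vigilcode.com/cert.pem
```

Every name you passed with -d should show up as a DNS: entry; if one is missing, re-run the certonly command with the full domain list.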

An example of a simple nginx config for “anothersite.com” that redirects any plain http traffic to https and points at these letsencrypt certs is:

server {
    listen 80;
    listen [::]:80;
    server_name www.anothersite.com anothersite.com;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;
    gzip off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_timeout 5m;
    ssl_certificate       /etc/letsencrypt/live/www.anothersite.com/fullchain.pem;
    ssl_certificate_key   /etc/letsencrypt/live/www.anothersite.com/privkey.pem;

    root /var/www/anothersite.com;

    index index.html;

    server_name www.anothersite.com anothersite.com;

    if ($request_method !~ ^(GET|HEAD|POST)$) {
        return 405;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
This config should earn you an easy A in the Qualys SSL Labs Server Test, which I strongly recommend you run against your site to ensure you always have an A. I assume if I turned on HSTS I’d get an A+ out of it, but I didn’t want to enable that yet.

With the above I did encounter Insecure Platform Warnings in the output. To resolve them, I installed pyOpenSSL and then modified the client to use it as well.

apt-get install python-pip
pip install pyopenssl ndg-httpsclient pyasn1

and then added 2 lines to client.py per this diff:

git diff acme/acme/client.py
diff --git a/acme/acme/client.py b/acme/acme/client.py
index 08d4767..9481a50 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -11,6 +11,7 @@ import OpenSSL
 import requests
 import sys
 import werkzeug
+import urllib3.contrib.pyopenssl

 from acme import errors
 from acme import jose
@@ -19,6 +20,7 @@ from acme import messages

 logger = logging.getLogger(__name__)

Configuring Automatic Renewals

I found a nice script in the letsencrypt forums that accomplished most of what I wanted. I made a few edits, so I also uploaded my modified version of it here in case the forum copy disappears.

If you look at the script, the basic renewal command is ${LEBIN} certonly --renew-by-default --config "${LECFG}" --domains "${DOMAINS}"
The only piece missing from our original command is that we were using the webroot authentication method and specifying the webroot path. Since I have different webroots for some of the virtual hosts, I want to put them in their own .ini files. So instead of specifying a static .ini file in his script, I changed it to be a variable matching the cert name:

Line 125: LECFG="/etc/letsencrypt/${CERT_NAME}.ini"

Now I’d create /etc/letsencrypt/www.anotherdomain.com.ini and /etc/letsencrypt/vigilcode.com.ini,
give each an email = line, and then also specify the webroot authenticator and webroot path, as in:

email = youremail@yourdomain.com
authenticator = webroot
webroot-path = /var/www/vigilcode.com

Then you can just pop these into a crontab with crontab -e:

10 2 * * 6 /root/scripts/auto_le_renew.sh vigilcode.com
15 4 * * 6 /root/scripts/auto_le_renew.sh www.anotherdomain.com

These will run every Saturday morning, so when I wake up I’ll have the success or failure message once the renewal is triggered.
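If you ever want to see how close a cert is to that 60-day renewal point, a small helper like the one below works (my own sketch, not part of the renewal script; it assumes openssl and GNU date are available):

```shell
# Hypothetical helper: print how many whole days remain before a PEM
# certificate expires. Pass it the cert.pem from a live/ directory.
days_until_expiry() {
    local end
    # openssl prints "notAfter=<date>"; keep only the date portion.
    end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
    # Convert both dates to epoch seconds and divide by one day.
    echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# e.g. days_until_expiry /etc/letsencrypt/live/vigilcode.com/cert.pem
```

A fresh Let’s Encrypt cert should report roughly 90; anything around 30 or below means a renewal run failed and is worth investigating.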

Jan 08 2016

This post will serve to help step you through the process of rooting your Google Nexus device with the systemless root method. As techniques change due to software updates, I intend to keep this post updated with the latest information. First, a bit of background to clarify how the traditional root differs from what is now known as systemless root.
As Google releases newer versions of Android, they are also attempting to increase the security of their handsets. They have enabled SELinux by default, added SafetyNet checks, and added some boot-up warning messages. The traditional root methods would install a superuser binary, “su”, onto the system partition of your phone, allowing apps to ask for root rights and escalate their privileges. Within the Android community, we want the benefits that root access provides, but we don’t necessarily want them at the cost of discarding all the security protections Google is trying to include. In other words, we can attempt to be better citizens of the Android rooting community by trying to have our root methods work within these security confines as much as possible. The first step along these lines was that instead of rooting the new Marshmallow OS and disabling SELinux with custom kernels, the new root method patched the kernels and updated SELinux policies to include what is needed for root applications to run and request the escalations they typically need. The second step was the ability to root without touching the system partition at all, keeping it completely stock, which has the unintended side effect of also allowing Android Pay to work on a systemless-rooted device.

What is root and why care about systemless?

I’d like to define what root really means. It’s quite simple really: root is obtaining elevated privileges on the device. It does not guarantee anything else but those escalated privileges. With other security methods being employed on devices, this does NOT mean you will be able to do everything with those escalated privileges. Depending on your device, SELinux policies may still hamper what you want to do, and the system partition may still not be writable or may have no free space in the stock image. Root doesn’t mean you can modify anything; it just means you have the elevated root privileges on that device.

It’s important to have this understanding as we discuss systemless root. One of the benefits of a systemless root is being able to have those escalated privileges while not tripping many of the other security flags set in place. You are able to keep a stock system image for the first time, in fact never even mounting it for rw access at all. This currently allows the SafetyNet checks to pass, and as such Android Pay still works on devices rooted in this manner. As the development of this method matures, receiving OTA updates may be easier, as you’d just need to revert the modified boot.img (kernel) back to stock to apply the OTA update, then patch the new one to get root back.

Obtaining a pure systemless root

Getting systemless root and even manually installing any of the monthly security updates is pretty easy once you’ve done it.  It takes me about 5 minutes to install a monthly update and re-root on my Nexus devices.  It will only get easier if SuperSu adds reverting the modified boot.img.  First let’s get some pre-requisites out of the way.


  1. Install the Android SDK.  From the link, grab the SDK Tools only package for your platform.  Extract it to some folder in your OS; it doesn’t really matter where, let’s just say you extracted it to an SDK folder.  In Windows you want to go into the android-sdk-windows folder that was extracted and run “SDK Manager.exe”.  For mac/linux, go into your extracted folder and look for a tools/ folder.  In there you’ll see an android executable file.  Run that.
  2. Once you have the Android SDK Manager open, click Deselect All on the bottom and select just the “Android SDK Platform-tools”, which is in the Tools section, and the “Android Support Library” down in the Extras section.  If you are on Windows, also check the Google USB Driver in that section. Click to install.  Once completed, the only real tools we now care about live in a platform-tools folder inside your extracted sdk directory.  Every time you go to update your device you should run the Android SDK Manager again and see if any of these 3 check boxes have updates available.  If they do, update them, as it’ll help avoid potential future issues.
  3. Download the latest TWRP recovery for your device.  Using a web browser, navigate to http://twrp.me, click on the Devices link in the upper right, and search for your device, e.g. “Nexus 6p” or “Nexus 5X”.  Once found, click on it, go down to the Download Links: section and click on “Primary (Recommended)”.  Download the latest version of the twrp-x.x.x.x.img file you see there.
  4. Download the Google Factory Image for your device.  Just find your device in that list and download the latest version which will be the last link in the column for your device.  You’ll want to extract this .tgz file but not the zip inside it.
  5. Next we need SuperSu itself.  Currently the version of SuperSu we want is available from Post #3 in this thread, since it is a Work in Progress (WIP).  You’ll want the latest version you see there, which currently is “BETA-SuperSU-v2.66-20160103015024.zip”.  Don’t extract this zip, just download it.

Now we are prepped. Outside of checking for newer versions of these pre-req’s once in a while, you don’t have to do these steps for every update. Now let’s root our Nexus device without modifying (or even mounting) the /system partition.

Apply Systemless Root:

  1. First you’ll want to reboot your device into the bootloader.  So that I don’t have to baby-step you here, feel free to google something like ‘how to reboot <device name> into bootloader’, but basically power it off, then hold volume down + power until you see the logo, then release power.  Done successfully, you should see a screen with the android logo on its back and a bunch of smaller-sized text on the bottom.
  2. Once in the bootloader, connect your device to your computer via USB, and using a command line go into your platform-tools folder from above.  Run the command fastboot devices
    Side Note: If you are in windows you’d probably have to type fastboot as fastboot.exe and if you are in mac/linux you may have to use ./fastboot to reference the one in the folder you are in. I’ve added the platform-tools directory to my path so if you do that in any OS, you can just type it as “fastboot” and not even be in the platform-tools folder. Regardless, once you see your device show up in the output of fastboot devices you can then proceed to unlock your bootloader. For the Nexus 6p and Nexus 5x you would use fastboot flashing unlock and for older Nexus phones you can use fastboot oem unlock. Look at your device and select Yes with the volume and power keys and now your device should show that it is bootloader unlocked.
  3. Now we can flash our previously downloaded factory image, either to update to this version or to do a full factory return to stock.  If you look in the extracted factory image you’ll see a .zip, a radio.img, a bootloader.img, and scripts called “flash-all”.  If you want to wipe your data and return to stock while flashing all of these to their latest versions, you can just run the flash-all script for your OS.  If you look at your device on the bootloader screen, it will display the version of your bootloader and radio (which could be labeled Baseband). You can then compare those against the versions of the two .img files and determine if they have been updated.  Since Google is releasing monthly security updates, many times these are not updated.  If they are newer than what is on your device you would flash them with:
    fastboot flash bootloader bootloader-blah.img
    fastboot reboot-bootloader
    fastboot flash radio radio-blah.img
    fastboot reboot-bootloader

    To flash the rest of the device partitions you can type
    fastboot update image-blah.zip
    This will NOT wipe your data; you can run this to update every partition with that factory image. If you want to factory reset your device with this version then you’d add a “-w” to that command to tell it to wipe, as in: fastboot -w update image-blah.zip
    After flashing the factory image, even if you intend to systemless root, let your device boot the first time and go through updating the apps. Once booted then power back off and go to the bootloader again.

    [UPDATE 05/2016] – Google has released the OTA images for Nexus devices, available here.  So optionally, instead of running the fastboot update image-blah.zip command from above, which is a few hundred megs and reflashes everything, you can just apply your OTA via adb sideload as detailed on their site.  This should be a tad quicker, and then you can continue on below just as before.

  4. Now we have the latest factory image on our device and we need to systemless root it. In order to flash the SuperSu zip we need to use a special recovery. That is the TWRP software we downloaded. You can either flash this recovery with fastboot flash recovery twrp.img or you can just boot into it temporarily in order to get root applied. I prefer the latter since it is one less thing to revert if I want to take an OTA update in the future. To just boot into twrp recovery we issue: fastboot boot twrp.img.  When twrp boots it will ask if you’d like to keep system read-only. You’ll want to say yes to that since we don’t want to even mount system for writing at all.
    Next we use the adb tool that is also in the platform-tools folder. We want to push our SuperSu zip file we downloaded from pre-requisite step 5. If you have that zip in your platform tools folder the command would be adb push BETA-SuperSU-vX.x.x.zip /sdcard/Download/. Don’t forget about that period at the end. Now before we install SuperSu we want to ensure it doesn’t bind mount a /system/xbin as that will trip SafetyNet checks. So we use adb again but this time the command is adb shell "echo BINDSYSTEMXBIN=false>>/data/.supersu"
    Now on your device you can click the Install button, navigate to the /sdcard/Download folder and flash the SuperSu zip file you see there.
  5. That’s it! Reboot your device, and if TWRP asks to install SuperSu, be sure to say NO.  This looks like a lot, but once you go through it once and have the pre-req’s all set, you can install the monthly updates in a matter of 5 minutes. The benefit is they post the updates to the Google factory images before developers will even see the code in AOSP and before they ever push OTA updates. So by using this method you can get onto these updates right away without waiting for anything.

Apps with workarounds for Systemless root

Since we installed systemless root here and the idea is to keep the system partition completely pristine and stock, I’d like to point out some of the more valuable root applications you can run while in this mode, and a couple that typically need to modify system but can still function perfectly fine.  First, I have root apps like Titanium Backup, Greenify, CF Lumen, and Nova Launcher that all have root access and don’t touch my system.  One of the most important apps for me, however, is AdAway, which needs to modify the /system/etc/hosts file in order to function properly.  Many people also use busybox, which installs to system, so for both of these apps we need a couple of workarounds to keep them from touching our /system partition.


For AdAway it has been made fairly simple.  You’ll want to go to this XDA forum thread and download both the latest AdAway application and the zip mentioned for the systemless hosts file.  Boot your device into that twrp recovery again.  Use adb push to push these two files to /sdcard/Download/ just as we did for the SuperSu zip.  In the recovery, flash the AdAway_systemless_hosts.zip and reboot your phone. Back in your OS, install AdAway from your /sdcard/Download folder using any file explorer like FX File Explorer, or even from F-Droid.  Now run AdAway, leaving it at its default target hosts location of /system/etc/hosts.

How can AdAway write to /system/etc/hosts while we still aren’t modifying our system partition?  Well, that systemless_hosts zip you flashed set up a special mount on the filesystem of your phone such that /su/etc/hosts (our systemless root is /su) was bind-mounted over /system/etc/hosts.  So even though AdAway thinks it’s writing to /system/etc/hosts, the data is really going to /su/etc/hosts, which lives on our /data partition.  This might be hard for some to follow, but just know that if you flash that zip you can use AdAway as normal and it will not be touching your system.


Getting busybox to not modify system is a bit more tricky.  I will update this post as I gather that information in detail.

Dec 28 2015
Notes Tidbits

This post will hold little notes and tidbits that I want to ensure I don’t lose yet don’t require their own dedicated blog post to discuss.

Removing and cleaning up old linux kernels

I run a lot of Ubuntu-based Linux servers, and depending on how they were partitioned, over time the unused and older Linux kernels can chew up a ton of space. I used to just remove the old images and headers from the /boot partition and run update-grub2 to update the grub menu with what is left. That does free up space on the actual /boot partition, but those unused kernels and images are still installed on your system. I’ve googled for the best way to remove these and found one command I’ve been using for a while now that works flawlessly and causes no issues, as long as you follow one rule. The one rule: before executing this command, update your system to the latest kernel (aptitude update && aptitude full-upgrade), and then REBOOT to actually boot into it. Once safely on your latest kernel, run this command to clean up every older version:
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
It looks long and scary, but it really isn’t that bad once you break it down, which someone thankfully did in this post. That’s it: follow the one rule, run the one command, and your server’s kernels will be nice and lean.
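To see what the pipeline is doing before trusting it, here is a small demonstration of the filtering stages on some canned dpkg -l style input (my own illustration; the package names and versions are made up, and we pretend the running kernel is 4.4.0-57-generic):

```shell
# Pretend `uname -r` returned 4.4.0-57-generic; the inner sed in the real
# command strips the "-generic" flavor, leaving the version that must NOT
# be matched (so the running kernel survives).
current="4.4.0-57"

# Sample `dpkg -l 'linux-*'` output: two old installed entries, the running
# kernel, and one already-removed (rc) entry.
# The sed stages: keep installed (ii) lines, drop the running kernel,
# extract the package-name column, keep only names containing a digit.
old_kernels=$(printf '%s\n' \
  'ii  linux-image-4.4.0-31-generic  4.4.0-31.50  amd64  Linux kernel image' \
  'ii  linux-image-4.4.0-57-generic  4.4.0-57.78  amd64  Linux kernel image' \
  'ii  linux-headers-4.4.0-31        4.4.0-31.50  all    Linux kernel headers' \
  'rc  linux-image-4.4.0-21-generic  4.4.0-21.37  amd64  Linux kernel image' \
  | sed '/^ii/!d;/'"$current"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d')

# Only the old, still-installed kernel packages survive the filter; the
# real command then hands this list to `xargs sudo apt-get -y purge`.
echo "$old_kernels"
```

Running it prints just linux-image-4.4.0-31-generic and linux-headers-4.4.0-31, i.e. exactly the packages you’d want purged.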

Jul 25 2011
Here Come 4k Advanced Format HDDs

Looks like the 4k sector drives are really starting to hit mainstream. This year marks the first time I’ve received a new laptop that came with a 4k Advanced Format hard drive. The laptop was from Dell and we get new Dell laptops in all the time. The funny thing is that until I tried installing Windows XP on it (it came with Windows 7), I would never have known it was a 4k sector drive. It’s one thing for an industry to move to a new technology or practice, but without digging around there is virtually no media coverage or even clear labeling about this change for HDDs. For the most part it should be a non-event for your typical user. If you buy a new computer with such a drive you shouldn’t notice any difference, though there are a few caveats I think people should keep in mind:

  • If you use hard drive cloning software, it’ll have to support 4k sectors.
  • If you buy a new USB disk that uses 4k sectors, be sure the OS you connect it to supports reading it.
  • If you buy a new computer and think to install an older OS on it, or buy a new HDD for an old computer and plan to keep the old OS, either don’t get a 4k sector drive or ensure you can get it working – continue reading…

For the most part you should be able to get everything working with 4k sector drives. A lot of the drives out now are 4k internally but emulate 512-byte sectors to the OS to keep with the old standard. These drives should be labeled or described as 512e drives. You might see a performance hit when using those, so either make sure you get a native 512-byte sector drive or explore your options for getting the 4k sector drive working. Western Digital, for example, makes an align utility you can use on their 4k drives that will correctly align the partitions if you use the disk on an OS that doesn’t support the new sector size (like XP). There is also a jumper option on the WD drives that will get the partition aligned for the unsupported OSes. From my quick research it seems any Linux kernel from 2.6.31+, Windows Vista, Windows 7, and Mac OS X 10.4+ should all be fine reading and writing to the 4k sector drives. It’ll be your Windows 2000, XP, etc. OSes that you’d have to pay close attention to.

I believe 512-byte sectors also hit their addressing limit at 2TB disks (that’s the ceiling of the 32-bit sector addressing used by MBR partition tables), so any drive larger than 2TB would have to be a 4k sector drive. Which means 4k sector drives should hit their capacity limit at 16TB, so I guess we’ll be revisiting this topic once manufacturers want to go over 16TB disks. If you want to learn why the industry needed to move to 4k sector drives, and more details on the low-level changes to the disks, one of the best articles I’ve found on it is here: 4K Advanced Format Hard Disks. Although this change is slipping into the mix very quietly, there are clearly situations where you’ll want to know if the drive you are getting uses 4k sectors, so take a moment to look into it and avoid any unexpected surprises.
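The 2TB and 16TB figures fall out of simple arithmetic: a 32-bit LBA field can address 2^32 sectors, so the maximum capacity is 2^32 times the sector size. A quick sanity check of those numbers (assuming the 32-bit addressing limit described above):

```shell
sectors=4294967296          # 2^32 addressable sectors (32-bit LBA)
tib=1099511627776           # bytes in one TiB (2^40)

# Maximum capacity = addressable sectors * sector size, shown in TiB.
echo $(( sectors * 512  / tib ))   # 512-byte sectors -> 2 (TiB)
echo $(( sectors * 4096 / tib ))   # 4096-byte sectors -> 16 (TiB)
```

So moving from 512-byte to 4096-byte sectors multiplies the addressable capacity by exactly 8, from 2TiB to 16TiB.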


Apr 16 2011
The Not So Humble Bundle

Ever play or even hear of Indie games? Many have not but a little piece of humble pie and some marketing genius has brought them into the limelight. Indie games aren’t a type of game but rather a classification of any game that is developed and distributed without a large video game publisher behind it. At first glance you think,

Big deal, if someone has a good enough game it will sell regardless.

Alas, it isn’t quite that easy. There are a good number of independent (hence “indie”) game developers out there without deep pockets, so getting your game known to the world, despite its merits, can be quite challenging. Enter the Humble Bundle. First started in May of 2010, the idea was to get a group of these indie games together as a bundle and sell them for… wait for it… whatever you want to pay! What?! That’s stupid, crazy, and will drive these guys straight out of business, no? No. It raised $1.27 million total, and the actual developers of the games ended up with roughly $166,000 each. This is GREAT money for an independent game developer.
Now why did the developers get so much less? Well that’s part of the beauty and genius of this system. Not only do you pay what you want but you also can divide up what you pay to a few different parties:

  • Child’s Play – a charity that brings video games to hospitalized children and helps to fight the stigma of video games
  • Electronic Frontier Foundation – Defender of digital rights.  Aligns with these Indie games that are all released DRM-free.
  • Humble Bundle, Inc. – the company that develops the promotion, pays for the site/server and bandwidth needed to run it all.

By default the amount you enter is divided up with 55% going to the developers and 15% each to the above-mentioned parties.  The option to give to charity and to divide the money however you wish are, I think, just 2 more things that help drive traffic and sales.  These bundles might be humble in the context of the small teams that create them, but this sales and distribution method is epic.  The second Humble Indie Bundle was launched in December of 2010, and by the time it ended it had raised $1.8 million.  There is now a third bundle, but it’s being called the Humble Frozenbyte Bundle since all the included games in this round are from the indie developer Frozenbyte.

Another great thing about these games is that they are multi-platform.   It’s not very easy to find games that will work on Windows, Mac, and Linux.  When you “buy” these via the Humble Bundle you get a link to download all the different games for all the different platforms.  In theory this link will be available for a long time but it’d be wise to download any/all that you think you will ever want and save locally.

There are a lot of things the Humble Bundle did to form a synergy of sorts, driving traffic, sales, and making all parties involved earn some money they would have otherwise never seen.  I hope this model not only continues but can spread to other games and even completely different markets.

Mar 23 2011
Cyberpower Cyberformance

I’ve been using APC battery backup products for decades now.  Anytime I looked at another brand they always fell short in two primary areas for me: reliability and features.  I can’t remember the last time I did research on home office or personal UPS devices, but it’s clear APC doesn’t own this market anymore.  Tripp Lite and what I’ll call the newcomer, CyberPower, seem to be hitting the sweet spot better than APC for home users or small offices.  [Side note: when talking about enterprise and real server rooms/NOCs, I’m still APC all the way.]

Beginning my perusal for a replacement for my APC Back-UPS RS 1500, I was hoping to find something priced well, with good reviews, that had a nice LCD panel on the front for displaying information.  A good LCD panel and front-panel interface is rather important for me because in my setup I have no machine on which to install the company’s power management software.  I primarily use these on NAS boxes, in my case a ReadyNAS NV+, plus the router, modem, switch, etc. devices that make up my primary network (nothing that one would be installing software on).  After looking at the product line-up from these companies, it seemed only CyberPower offered something close to what I was looking for.  APC really doesn’t have much with a good LCD interface in the 900VA to 1500VA range; their best bet would probably be the BR1500G.  I’ve read a lot of reviews from various sites and had some concerns with the circuit design, noise of fans, and even reliability.  From Tripp Lite, the OMNI line has a decent LCD display, but again there were many things in the reviews that scared me a bit, reliability being near the top.

Looking closer at the CyberPower line, there were 2 units that seemed really good, the CP1500AVRLCD and the CP1500PFCLCD.  These had the most consistent positive reviews across the board, and from those, reliability and support seemed above what users experience with APC and Tripp Lite.  These units seemed much more “up-to-date” to me.  I feel CyberPower cares more about this specific market than APC or Tripp Lite and focuses most of their resources on it.  I like when a company gets directly involved with their users via comments or forums that aren’t even part of their own web domain.  I found this with CyberPower while reading a review on Newegg.  The user left a negative review about the CP1500AVRLCD not working with the computer they had connected to it.  A representative from CyberPower responded to this:

Manufacturer Response:

Thank you for your comments. You are correct that systems using a power supply with Active PFC (including ENERGY STAR 5.0 systems) may experience issues with a non-sine wave UPS. As a result, CyberPower introduced the Adaptive Sinewave UPS line to address these issues. The CP1500PFCLCD w/Pure Sine Wave provides the most cost effective UPS solution for systems using power supplies with Active PFC.

To assist customers with purchasing decisions, CyberPower lists detailed specifications on our website (cyberpowersystems.com) where waveform types are listed. As our packaging evolves, we also review the information we place on the box. With Active PFC power supplies becoming more prevalent, we will be reviewing the best way to help customers select the right product for them and addressing it with future packaging as well as on-line information.

If you need additional assistance, please contact CyberPower Technical Support at 877-297-6937 or email priority1@CPSww.com.

In fact it was this response that even led me to look at the CP1500PFCLCD, as the AVR line was the only model showing up in my searches for an LCD UPS.  These models look almost identical but have one big difference: the PFC model actually outputs a pure sine wave of power to the connected devices instead of passing along the power direct from your circuit.  This is needed with newer and greener power supplies that use Active PFC technology.  I really can’t think of a reason to go with the AVR model, since non-Active-PFC power supplies will still work fine with the PFC model and be guaranteed cleaner power at that.  The one downside (depending on your perspective) is that the UPSes that output a pure sine wave will jump to battery power when they detect the incoming voltage rising above or falling below specific thresholds.  These thresholds are much more sensitive than what it would take for a unit without this technology to fall back to its battery.  For example, I’ve had a laser printer on the same circuit as my UPS for years.  With the CP1500PFCLCD, every time the printer warms up to print, it draws a lot of power on the line and the UPS alarms and goes to battery for a few seconds until the voltage returns.  My older APC UPSes that did not have this technology never went to battery power during this warm-up.  Usually this sensitivity is adjustable, and with CyberPower you can do it right from the LCD interface; on a Tripp Lite model you need to slowly turn a potentiometer in the rear of the unit.  Of course, the lower you set the sensitivity, the less clean the power you are guaranteeing to your connected devices.  I now have the CyberPower muting all alarms until I can upgrade my printer to one not so old and power hungry.  All alarms are muted unless a situation arises with only 5% of battery left; then it will audibly alert again.  This is also a nice feature not found from other companies; sometimes it is unusually difficult to simply mute UPS alarms.

The only real option I like to have available that CyberPower fell short on is the ability to add an extended battery pack to these units to increase uptime when the power is out.  Outside of that missing option, these CyberPower units, with good reviews, a company that looks involved, a great feature list, and one of the most modern designs, should really earn your consideration if you are in the market to replace an older UPS.  I know a lot of people that just know the APC name and search solely for a new APC UPS, but I think better products are now out there and you could be impressed with the cyberformance.