Resize your EC2 instances with minimal downtime

Posted on July 15, 2014 | 2 comments

Amazon Web Services (AWS) provides a great service-oriented way of creating virtual machines in the cloud with its Elastic Compute Cloud (EC2) system. There are many reasons you might want to increase or decrease the size of an EC2 instance on AWS. Maybe you misjudged how much traffic you’d be getting, or maybe you need more horsepower to finish a certain workload in less time.

Bigger instance sizes on AWS of course come with a higher price tag, but depending on what you need them for, the increased performance can pay for itself.

So let’s say you chose an m3.medium instance when setting up your first EC2 box, and you suddenly realize you need more horsepower and decide to move up to an m3.xlarge instance. Luckily, AWS lets you do this with almost no downtime for your original box (it just needs to reboot once).

Create an AMI of your existing box

To clone an existing EC2 box, you first need to create an Amazon Machine Image (AMI) of it. These images make it easy to port a machine to another instance size, or simply to snapshot a machine so you can kill it and bring it back at a later time.

Right click your instance and choose “Create Image”.

[Image: Amazon Machine Images (AMI) are snapshots of an entire EC2 machine]

Give it a name

After clicking Create Image, you will be presented with a dialog like the one pictured below. Enter a name and description that will allow you to remember what this AMI was for when you look it up in the future.

[Image: Give your new AMI a meaningful name and description]

When you click the big blue “Create Image” button, the image creation process will start – this can take a few minutes, so go grab a coffee. One other important thing to note: this action will cause your instance to reboot. You can check the “No Reboot” option, but here’s what Amazon has to say about that:

By default, Amazon EC2 shuts down the instance, takes snapshots of any attached volumes, creates and registers the AMI, and then reboots the instance. Select No reboot if you don’t want your instance to be shut down.

Warning
If you select No reboot, we can’t guarantee the file system integrity of the created image.

So it’s safer to allow AWS to reboot your EC2 instance in this situation.
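
If you prefer the command line, the same step can be done with the AWS command line client (a sketch – the instance ID, name, and description below are placeholders):

$ aws ec2 create-image --instance-id i-1a2b3c4d \
    --name "webserver-pre-resize-2014-07-15" \
    --description "Snapshot of webserver before resizing to m3.xlarge"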

Launch your new resized instance

Once the AMI is created (you can check the progress on the “AMIs” screen), you can fire up your new instance with a few clicks. Navigate to the AMIs screen, right click your AMI, and select “Launch”. At this point you’ll be taken through the normal instance creation process – this is where you would select the m3.xlarge instance in my example. You’ll also have the opportunity to add more storage and create a new keypair (or you can simply reuse the keypair from your original instance).

[Image: Launching your new and improved EC2 instance is as easy as the click of a mouse]

Once your new machine comes up, remember that it will have a different DNS name from your original box, so update any saved SSH/RDP connections you might have. After making sure that your shiny new instance is good to go, you can feel free to terminate your original box.
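
The launch can also be scripted with the command line client (again a sketch – the AMI ID and key name are placeholders):

$ aws ec2 run-instances --image-id ami-1a2b3c4d \
    --instance-type m3.xlarge \
    --key-name my-keypair \
    --count 1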

Bonus Points

If you had your original instance tucked behind a load balancer with a twin instance (twinstance?), you could have done all of this with zero downtime: perform this procedure on each of your instances one at a time, add the new instances to the load balancing group alongside the old ones, and eventually terminate the old ones.
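
If you’re scripting the swap, the ELB API has calls for exactly this – something like the following, with the load balancer name and instance IDs as placeholders:

$ aws elb register-instances-with-load-balancer \
    --load-balancer-name my-load-balancer \
    --instances i-5e6f7a8b
$ aws elb deregister-instances-from-load-balancer \
    --load-balancer-name my-load-balancer \
    --instances i-1a2b3c4d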


Roll your own dynamic DNS service using Amazon Route53

Posted on July 3, 2014 | 11 comments

I had used the free Dynamic DNS (DDNS) service from Dyn since about 2006 and never had a single issue with it. That all changed when they phased out their free accounts. Forced to find an alternative, I went with No-IP.com, which was easy to set up and provided a great service.

Recently, No-IP has been having some legal troubles that seem to revolve around Microsoft’s crusade to rid the world of spammers/scammers/malware/botnets. My hostname was one of the ones nixed by Microsoft’s overly broad court order. I’m sure MSFT could have just worked with No-IP’s abuse team and taken down only the offending domains – but I’m not going to get into a rant about that.

So, I did what any self-respecting hacker does in this situation and decided to roll my own. I was already familiar with Amazon’s Route53 service so I figured why not? They have a nice REST API with granular access controls, as well as a command-line client that makes interacting with said API a breeze.

Step 1: Install awscli

You can install the AWS command line client really easily using pip:

# pip install awscli

Note: if you don’t have pip installed, try:

# easy_install pip

Then retry the pip command. If you’re still having a hard time, just follow the official instructions here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html.

Make sure you’ve got it installed properly by running aws --version. Next, you need to set up your access credentials using aws configure.
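
The configure step prompts for your credentials and a couple of defaults – something like this (keys redacted, region up to you):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json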

Step 2: Set up your hosted zone on Route53

This can be a bit complicated if you’re not familiar with the various ins and outs of DNS. The first step is to log into your AWS console and go to the Route53 service.

Next, create a new Hosted Zone by clicking “Create Hosted Zone”.

[Image: Creating a Hosted Zone in Route53]

Once you’ve created the zone, select it in the list and a list of properties will show up on the right-hand side. One of these is called Delegation Set – make a note of its 4 addresses; you’ll need them once you’ve finished setting up all your DNS records.

Step 3: Populate your DNS records

This step will vary from domain to domain depending on what you need. The fastest way is to use the Import Zone File button if your current DNS provider allows for easy exporting of your existing zone file (mine did not – urgh). If not, you’ll need to create them all manually.

Step 4: Update your Nameservers

Your domain registrar should have a place in its admin panel where you can change your nameserver records to point at your new Route53 nameservers instead.

You need to use the 4 addresses that were provided as your Delegation Set back up in Step 2.

Step 5: Use the BASH, Luke

Now for the fun bit. Before anyone gives me grief for not coding my own REST client: I did this all in about an hour one evening, and it works well. There are a number of ways it could be accomplished – REST APIs are easily consumable by just about every programming language out there – I just decided to write a BASH script for the sake of time.

This script is designed to be run either manually as needed, or on a schedule using something like cron.

I won’t go into explaining line by line what this does, but the gist of it (get it?) is that it goes out to icanhazip.com to get the current public IP address, makes sure it’s valid, compares it to the last one it got, and if the address has changed, updates the Route53 record set using the awscli tool. It logs to a file called update-route53.log every time it runs, and stores the last IP it saw in a file called update-route53.ip.
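
Here’s a minimal sketch along those lines – the hosted zone ID, record name, and TTL are placeholders you’d swap for your own values:

#!/bin/bash
# update-route53.sh – sketch of a dynamic DNS updater for Route53.
# Assumes awscli is installed and `aws configure` has been run.

ZONE_ID="ZXXXXXXXXXXXXX"       # your Route53 hosted zone ID (placeholder)
RECORD="home.example.com."     # the A record to keep updated (placeholder)
TTL=300
DIR="$(dirname "$0")"
LOGFILE="$DIR/update-route53.log"
IPFILE="$DIR/update-route53.ip"

# Fetch the current public IP and make sure it looks like an IPv4 address.
IP=$(curl -s http://icanhazip.com)
if ! [[ "$IP" =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}$ ]]; then
    echo "$(date): got invalid IP '$IP', giving up" >> "$LOGFILE"
    exit 1
fi

# Nothing to do if the IP hasn't changed since the last run.
if [ -f "$IPFILE" ] && [ "$IP" = "$(cat "$IPFILE")" ]; then
    echo "$(date): IP unchanged ($IP)" >> "$LOGFILE"
    exit 0
fi

# UPSERT the record set via the Route53 API.
aws route53 change-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" \
    --change-batch "{
      \"Changes\": [{
        \"Action\": \"UPSERT\",
        \"ResourceRecordSet\": {
          \"Name\": \"$RECORD\", \"Type\": \"A\", \"TTL\": $TTL,
          \"ResourceRecords\": [{ \"Value\": \"$IP\" }]
        }
      }]
    }" >> "$LOGFILE" 2>&1

echo "$IP" > "$IPFILE"
echo "$(date): updated $RECORD to $IP" >> "$LOGFILE"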

Feel free to poke fun at my BASH skills.

Optional Step 6: Set up your crontab

To make this run every 30 minutes, I added this to my crontab (using crontab -e):

*/30 * * * * /home/will/scripts/update-route53.sh

How much does it cost?

The Amazon Route53 Pricing Page does a pretty good job of explaining it – basically you’re looking at about $0.60 – $1.00/month depending on your site’s traffic.


Notes on switching to OSX from Windows

Posted on June 19, 2014 | no comments

I recently got a new MacBook Pro as my work machine. As someone who had never used a Mac for any serious length of time, I found it quite a culture shock.

Work

[Image: IntelliJ IDEA plays really well with OSX]

My job involves a lot of Java web development – lots of compiling, debugging, testing, etc. I used IntelliJ IDEA on Windows, and it works great (and looks much nicer) on OSX, so the hardest part was getting used to a whole new set of hotkeys.

Server administration over SSH is actually easier on OSX thanks to the built-in SSH client in the terminal. The only downside is that you now have to find an alternative for maintaining all your connection bookmarks (tip: use Shuttle; it’s linked below).

Play

All work and no play makes Will a dull boy – sometimes I like to break out the video games and listen to music as well.

Gaming

Some of my favorite Steam games (CS:GO, Borderlands 2, FTL) are also available on OSX, and they perform amazingly on this machine.

[Image: I can still get my sweet sweet loot]

UPDATE: I decided to set up Boot Camp alongside my OSX partition so that I could play more of the games I own on Steam – namely PAYDAY 2 and Skyrim, which don’t run natively on OSX.

Music

I used to avoid iTunes like the plague, but on OSX it actually doesn’t suck. It plays all my music with no issues and has some really nice features – not to mention OS integration. I just pointed it at my music library and it was good to go. It even filled in some missing album art here and there.

Make the transition easier

Some steps I took that made the migration easier:

  • Install Homebrew as soon as you can.
    • brew install bash-completion
    • brew install git (do yourself a favor and don’t use the official OSX installer from the git website)
    • brew install git-extras
    • brew install wget
  • Install Shuttle to replace PuTTY and manage your SSH bookmarks
    • Use brew install putty to get puttygen, which can convert your .ppk files into OpenSSH private keys (see the example just after this list)
  • Use the built in Mail client (it’s really good)
  • Use the built in Calendar app (it’s also really good)
  • Use Sublime Text instead of TextEdit
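
Converting a PuTTY key with puttygen looks like this (hypothetical filenames):

$ puttygen mykey.ppk -O private-openssh -o ~/.ssh/mykey
$ chmod 600 ~/.ssh/mykey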

Files & Data

As far as moving files goes, most of my important stuff is in Dropbox and Google Drive anyway, so I didn’t need to copy much over. I have a number of VMs that run on VirtualBox (also available on OSX), so I copied those over, and that was it.

If you have your Dropbox fully synced on another machine on your LAN, it will just pull the files over your local network instead of the internet, which vastly decreases sync time.

I’ll keep this post updated with any other new tidbits I uncover, but all in all, I’m really happy with the switch so far.


Setting up SPF records for Google Apps and Amazon SES

Posted on April 21, 2014 | 4 comments

The Sender Policy Framework (SPF) is an attempt to mitigate certain types of spam – specifically, spam where the sender masquerades as someone else. Technically you can put whatever you want in the From: header of an email message, so you could pretend to be sending emails from facebook.com simply by putting something like From: no-reply@facebook.com in your email’s headers.

Receiving mail servers counter this by looking up the SPF record for the sender’s domain (published in its DNS records). The SPF record tells the mail server: “here are the originating IP addresses that are legit – if a message arrives claiming to be from this domain, make sure the originating IP address is on this list”.

Using GMail and SES?

If you happen to be sending emails through both Google Apps and Amazon SES (e.g. automated system emails via SES and “real” person-to-person emails via GMail), you need to ensure that your SPF record allows for both senders.

So here’s how: publish the following as a TXT record in your DNS (and as an SPF-type record too, if your provider supports it – the dedicated SPF record type never caught on, and TXT remains the standard place for SPF data):

"v=spf1 include:amazonses.com include:_spf.google.com -all"

Note: the -all means that mail servers should not trust email from anywhere other than the listed senders. If you’re unsure, or believe you might need to send mail from other sources as well, use ~all (soft fail) instead.
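
Once the record is live, you can verify it with dig (substituting your own domain):

$ dig +short TXT example.com
"v=spf1 include:amazonses.com include:_spf.google.com -all"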


What is Heartbleed and why do I care?

Posted on April 9, 2014 | no comments

[Image: Heartbleed logo. Credit: http://heartbleed.com]

Heartbleed is a bug in the OpenSSL library that was publicly disclosed on April 7th, 2014 by an internet security firm called Codenomicon. With OpenSSL being the de facto SSL library for both the Apache and nginx webservers, the bug potentially exposes about two-thirds of the internet. If we exclude the websites that don’t use SSL at all, we are left with a nice round number: half a million.

Half a million websites exposed to a security hole of this magnitude is completely unheard of in modern history. The mad scramble to plug the hole has been ongoing since the disclosure, with major players like Amazon AWS, GitHub, and Google putting out announcements about their remediation efforts in the last 48 hours. Here in Canada, the CRA (Canada Revenue Agency) actually shut down all of its online filing systems until it could get things sorted out. Maybe a little late, but at least they’re on top of it. Not bad for a monolithic federal department.

The cloud hosting company CloudFlare apparently got wind of this bug about a week before anyone else, which raises the question: why were they told first? Why didn’t they or the researchers notify any of the major Linux distributions? NDAs? We may never know, but either way this does not sound like “responsible disclosure” to me. When security bugs like this are actually disclosed responsibly, poor sysadmins aren’t up at 3am building custom RPMs while frantically revoking as many SSL certs as they possibly can.

How did this happen?

The cause of the Heartbleed bug (a.k.a. CVE-2014-0160) was probably one of the rookiest mistakes you can make as a C programmer. Anyone who’s ever written C code will know the pain caused by not doing bounds checks when accessing memory. That’s exactly what happened here: the TLS heartbeat handler trusted the payload length claimed by the other end of the connection and echoed back that many bytes from memory, no matter how few bytes were actually sent.

What’s the damage?

In my opinion, the really sad part about all of this is that the bug had been floating around for over two years. Anyone who was savvy enough, and motivated enough to attack someone using this exploit, will already have done so. The effects of this security breach are going to be felt for years, as slow hosting companies neglect to upgrade in time, compromised SSL certificate keys linger, people’s account details get stolen, and so on.

What do we sysadmins do about it?

The key thing for sysadmins to do right now is upgrade their versions of libssl and openssl ASAP. Decent system administrators will get this done NOW (maybe sooner – get a time machine, since we’re two days in), either through official channels or by recompiling with the OPENSSL_NO_HEARTBEATS flag enabled. Good system administrators will also revoke their SSL certificates and issue new ones. Great sysadmins will replace their SSL certificates with brand new ones generated from brand new shiny private keys, since the old keys should be considered compromised as well.
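
A quick way to check whether a box is still vulnerable is to look at the OpenSSL version and build date – 1.0.1g or later (or a 1.0.1 build stamped April 7th, 2014 or later) means patched. Distributions often backport the fix without bumping the version, so on RPM-based systems the package changelog is the more reliable check:

$ openssl version -a | head -2
$ rpm -q --changelog openssl | grep CVE-2014-0160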

Wait, why do I care again?

Imagine a smart (but evil) attacker setting up a standard man-in-the-middle attack and pretending to be your bank’s online web portal. Normally this can’t work, because your computer only trusts the portal when it identifies itself as legitimately being your bank using SSL (or TLS, more specifically). The attacker can’t impersonate your bank because he doesn’t have the private key that goes with the bank’s SSL cert. Now imagine that your bank was vulnerable to Heartbleed. The attacker can read off arbitrary 64KB blocks of your bank’s webserver’s RAM and, given enough time, could potentially recover the bank’s private keys.

Now our attacker has the cert and the private key, and can set up shop wherever he likes, posing as your bank. All that’s left to do at that point is let the logins happen while he harvests usernames/passwords/sessions to his heart’s content.

So from a consumer point of view, for any site on which you care a lot about security (or any site at all, really): change your passwords yesterday! Scratch that – wait until the service has contacted you and informed you that the vulnerability is fixed, and then change your password!

What do regular people do about it?

Need me to repeat it? CHANGE YOUR PASSWORDS NOW.

Update: I jumped on the “change your passwords now” bandwagon, but as a friend pointed out to me it’s vitally important to make sure that the services you use have fixed the vulnerability BEFORE changing your passwords. Wait for them to contact you telling you that they are no longer vulnerable.


Adding firewall rules for Oracle Database using iptables

Posted on March 18, 2014 | no comments

To connect to a box on your network that is running Oracle Database, you will first need to allow connections to Oracle through your firewall.

If you’re running CentOS, RHEL, Fedora, or any other Linux variant that uses iptables, use the following commands to create a firewall exception (assuming you’re running your listener on the default port 1521 – check with sudo lsnrctl status):

$ sudo iptables -I INPUT -p tcp --dport 1521 -j ACCEPT

Or, to limit connections to a specific IP address (e.g. 192.168.1.20) or an IP block (e.g. 192.168.1.0/24), use the -s option:
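
$ sudo iptables -I INPUT -p tcp -s 192.168.1.20 --dport 1521 -j ACCEPT
$ sudo iptables -I INPUT -p tcp -s 192.168.1.0/24 --dport 1521 -j ACCEPT

Note that rules inserted this way don’t survive a reboot – on CentOS/RHEL you can persist them with sudo service iptables save.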


Apache Tomcat with SSL behind Amazon ELB

Posted on January 27, 2014 | 2 comments

If you’re running a high-availability system of some kind, chances are you’re into some sort of load balancing. If you happen to be writing a Java app, and happen to be using Apache Tomcat as your servlet container, then this tip is for you.

I had a system which needed to be HTTPS-only, but with SSL terminated at the load balancer. Naturally, I forwarded the HTTP and HTTPS ports on my Elastic Load Balancer and configured my application to redirect any insecure connections to SSL. I then started seeing a strange issue where, occasionally, a connection would be left on HTTP when it should have been redirected.

My setup was basically:

  HTTP (80) -----> ELB -----> Tomcat (8080)
HTTPS (443) -----> ELB -----> Tomcat (8080)

It turned out I needed to set a couple of extra options in my Tomcat HTTP Connector section, so that Tomcat knows it’s sitting behind a proxy that terminates SSL.
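
A sketch of what such a Connector can look like – scheme, secure, and proxyPort are the attributes doing the work here, with the port numbers assumed from the diagram above (your exact combination may differ):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           scheme="https"
           secure="true"
           proxyPort="443" />

With these set, Tomcat builds redirect URLs and request metadata as if it were serving HTTPS on 443 itself, instead of plain HTTP on 8080.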


Redacting accidental password entries from your BASH history

Posted on August 5, 2013 | no comments

From time to time, I have been known to accidentally type my password into a “username” prompt in a bash shell. When that happens, the password becomes part of your ~/.bash_history file forever – unless you truncate or redact it.

A quick way to do this is to clear your session’s history list, so the offending entry never gets written out:

$ history -c

Don’t forget to end your session ASAP as your password will still be stored in memory until you do.
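
If the password is the only bad entry and you’d rather not nuke everything, you can also delete just the offending line and then force the cleaned list out to disk:

$ history | tail         # find the number of the offending entry
$ history -d 1042        # delete that single entry (1042 is an example)
$ history -w             # overwrite ~/.bash_history with the cleaned list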

For the truly paranoid (like me), I also recommend changing your password right away, in case someone happened to be snooping your session at the exact moment you entered your password in plain text.

Now, where’s my tinfoil hat?
