Amazon Web Services (AWS) provides a convenient, service-oriented way of creating virtual machines in the cloud with its Elastic Compute Cloud (EC2) system. There are many reasons you might want to increase or decrease the size of an EC2 instance. Maybe you misjudged how much traffic you’d be getting, or maybe you need more horsepower to finish a certain workload in a shorter time.
Increased instance sizes on AWS of course come with a higher price tag, but depending on what you need them for, the increased performance could pay for itself.
So let’s say you chose an m3.small instance when setting up your first EC2 box, suddenly realized you need more horsepower, and decided to move up to an m3.xlarge instance. Luckily, AWS lets you do this with almost no downtime for your original box (it just needs to reboot once).
To create a clone of an existing EC2 box, you need to first create an Amazon Machine Image (AMI) of it. These images allow for easy porting of a machine to other instance sizes, or maybe you just want to snapshot your machine in order to kill it and bring it back at a later time.
Right click your instance and choose “Create Image”.
After clicking Create Image, you will be presented with a dialog like the one pictured below. Enter a name and description that will allow you to remember what this AMI was for when you look it up in the future.
When you click the big blue “Create Image” button the process to create your image will start – this can take a few minutes, so go grab a coffee. One other important thing to note is that this action will cause your instance to reboot. You can check the “No Reboot” option, but here’s what Amazon has to say about that:
By default, Amazon EC2 shuts down the instance, takes snapshots of any attached volumes, creates and registers the AMI, and then reboots the instance. Select No reboot if you don’t want your instance to be shut down.
If you select No reboot, we can’t guarantee the file system integrity of the created image.
So it’s safer to allow AWS to reboot your EC2 instance in this situation.
Once the AMI is created (you can check the progress on the “AMIs” screen), you can fire up your new instance with a few clicks. Navigate to the AMIs screen, right click your AMI and select “Launch”. At this point you’ll be taken through the normal instance creation process – this is where you would select your m3.xlarge instance in my example. You will also have the opportunity to add more storage and to create a new keypair if you want (or you can just reuse the keypair from your original instance).
Once your new machine comes up, remember that it will have a different DNS name from your original box, so update any saved SSH/RDP connections you might have. After making sure that your shiny new instance is good to go, you can feel free to terminate your original box.
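The same resize can be sketched with the AWS CLI instead of the console. Everything below is a sketch: the instance/AMI IDs and names are hypothetical placeholders, and each command is prefixed with echo so it’s safe to run as-is – remove the echo to execute for real.

```shell
# Hypothetical instance/AMI IDs -- substitute your own.
instance_id="i-0123456789abcdef0"
new_type="m3.xlarge"

# Step 1: image the existing box (this reboots it unless --no-reboot is given).
echo aws ec2 create-image --instance-id "$instance_id" \
    --name "pre-resize-backup" --description "Snapshot before resize"

# Step 2: once the AMI reports 'available', launch it at the bigger size.
ami_id="ami-0123456789abcdef0"
echo aws ec2 run-instances --image-id "$ami_id" --instance-type "$new_type" \
    --key-name "my-existing-keypair"
```

The echo guard is just a dry-run trick; the underlying `aws ec2 create-image` and `aws ec2 run-instances` subcommands are the CLI equivalents of the console steps above.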
If you had your original instance tucked behind a load balancer with a twin instance (twinstance?), you could obviously have done this all with zero downtime: perform this procedure on each of your instances one at a time, add the new instances to the load balancing group alongside the old ones, and eventually terminate the old ones.
I used the free Dynamic DNS (DDNS) service from Dyn since about 2006 and never had a single issue with it. That all changed when they phased out their free accounts. I was forced to find an alternative, so I went with No-IP.com which was easy to set up and provided a great service.
Recently, No-IP has been having some legal troubles that seem to revolve around Microsoft’s crusade to rid the world of spammers/scammers/malware/botnets. My hostname was one of the ones nixed by Microsoft’s overly broad court order. I’m sure MSFT could have just worked with No-IP’s abuse team and taken down only the offending domains – but I’m not going to get into a rant about that.
So, I did what any self-respecting hacker does in this situation and decided to roll my own. I was already familiar with Amazon’s Route53 service so I figured why not? They have a nice REST API with granular access controls, as well as a command-line client that makes interacting with said API a breeze.
You can install the AWS command line client really easily using pip:
# pip install awscli
Note: if you don’t have pip installed, try:
# easy_install pip
Then retry the pip command. If you’re still having a hard time, just follow the official instructions here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html.
Make sure you’ve got it installed properly by running aws --version. Next, set up your access credentials using aws configure.
This can be a bit complicated if you’re not familiar with the various ins and outs of DNS. The first step is to log into your AWS console and go to the Route53 service.
Next, create a new Hosted Zone by clicking “Create Hosted Zone”.
Once you’ve created the zone, select it in the list and you should see a list of properties show up on the right hand side. One of the properties is called Delegation Set – make note of these 4 addresses – you will need them once you’ve finished setting up all your DNS records.
This step will vary from domain to domain depending on what you need. The fastest way is to use the Import Zone File button if your current DNS provider allows for easy exporting of your existing zone file (mine did not – urgh). If not, you’ll need to create them all manually.
Your domain registrar should have somewhere in their admin panel that allows you to change your nameserver records to point at your new Route53 nameservers. You need to use the 4 addresses that were provided as your Delegation Set back in Step 2.
Now for the fun bit. Before anyone gives me grief for not coding my own REST client, I did this all in about an hour one evening and it works well. There are a number of different ways it could be accomplished. REST APIs are easily consumable by just about every programming language out there – I just decided to do a BASH script for the sake of time.
This script is designed to be run either manually as needed, or on a schedule using something like cron.
I won’t go into explaining line by line what this does, but the gist of it (get it?) is that it goes out to icanhazip.com to get the current IP address, makes sure it’s valid, compares it to the last one it got, and if it has changed, updates the Route53 Record Set using the awscli tool. It logs to a file called update-route53.log every time it runs and stores the last IP it got in a separate file.
Feel free to poke fun at my BASH skills.
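As a rough sketch of what such a script can look like (this is not my original script): the hosted zone ID, hostname, and file paths below are placeholder assumptions, while the update itself uses the real `aws route53 change-resource-record-sets` CLI call with an UPSERT change batch.

```shell
#!/bin/bash
# Sketch of a Route53 DDNS updater. ZONE/HOSTNAME values are placeholders.

valid_ip() {
  # Accept only dotted-quad addresses like 203.0.113.7.
  [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]
}

update_dns() {
  local zone_id="Z0000000000000"        # placeholder hosted zone ID
  local hostname="home.example.com."    # placeholder record name
  local ip_file="$HOME/.last-route53-ip"
  local log_file="$HOME/update-route53.log"

  local ip
  ip=$(curl -s http://icanhazip.com)
  if ! valid_ip "$ip"; then
    echo "$(date): got invalid IP '$ip'" >> "$log_file"
    return 1
  fi

  # Skip the API call entirely when nothing changed since the last run.
  [[ -f "$ip_file" && "$(cat "$ip_file")" == "$ip" ]] && return 0

  # UPSERT creates the A record if it's missing, updates it otherwise.
  aws route53 change-resource-record-sets --hosted-zone-id "$zone_id" \
    --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{
      \"Name\":\"$hostname\",\"Type\":\"A\",\"TTL\":300,
      \"ResourceRecords\":[{\"Value\":\"$ip\"}]}}]}" || return 1

  echo "$ip" > "$ip_file"
  echo "$(date): updated $hostname -> $ip" >> "$log_file"
}
```

Call `update_dns` at the bottom of the script to run it; keeping the logic in a function makes it easy to source and test the pieces separately.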
To make this run every 30 minutes, I added this to my crontab (using crontab -e):
*/30 * * * * /home/will/scripts/update-route53.sh
The Amazon Route53 Pricing Page does a pretty good job of explaining it – basically you’re looking at about $0.60 – $1.00/month depending on your site’s traffic.
I recently got a new MacBook Pro as my new work machine. As someone who’s never used a Mac for any serious length of time, it was quite a culture shock.
My job involves a lot of Java web development – lots of compiling, debugging, testing, etc. I used IntelliJ IDEA on Windows, and it works great (and looks much nicer) on OSX, so the hardest part was getting used to a whole new set of hotkeys.
Server administration over SSH is actually easier on OSX due to the built-in SSH functionality in the terminal. The only downside is that you now have to find an alternative method for maintaining all your bookmarks (tip: use Shuttle, it’s linked below).
All work and no play makes Will a dull boy – sometimes I like to break out the video games and listen to music as well.
Some of my favorite Steam games (CS:GO, Borderlands 2, FTL) are also available through Steam on OSX and they perform amazingly on this machine.
UPDATE: I decided to do Bootcamp with my OSX partition so that I could play more of the games that I own on Steam – namely PAYDAY2 and Skyrim that don’t work natively on OSX via Steam.
I used to try and avoid iTunes like the plague, but on OSX it actually doesn’t suck. It plays all my music with no issues and has some really nice features – not to mention OS integration. I just pointed at my music library and it was good to go. It even cleaned up some missing album art here and there.
Some steps I took that made the migration easier:
- brew install bash-completion
- brew install git (do yourself a favor and don’t use the official OSX installer from the git website)
- brew install git-extras
- brew install wget
- brew install putty – to get puttygen, which can convert your .ppk files into OpenSSH private keys
As far as moving files over, most of my important stuff is in Dropbox and Google Drive anyway, so I didn’t need to copy much over. I have a number of VMs which run on VirtualBox (also available on OSX), so I copied those over and that was it.
If you have your Dropbox fully synced on another machine on your LAN, it will just pull the files over your local network instead of the internet which vastly decreases sync time.
I’ll keep this post updated with any other new tidbits I uncover, but all in all, I’m really happy with the switch so far.
The Sender Policy Framework (SPF) is an attempt to mitigate certain types of spam – specifically spam where the sender masquerades as a different sender. Technically, you can put whatever you want in the From: header of an email message, so you can pretend to be sending emails from facebook.com simply by putting something like From: someone@facebook.com in your email’s headers.
Email relay servers prevent this by looking up the sender’s domain’s SPF record (defined in DNS records). The SPF record tells the mail server “here are some originating IP addresses that are legit, if a message arrives pretending to be from this domain, make sure the originating IP address is on this list”.
If you happen to be sending emails through Google Apps and Amazon SES (e.g. automated system emails via SES and “real” person-to-person emails via GMail), you need to ensure that your SPF record is set up to allow for both domains.
So here’s how: put this in your DNS system’s SPF record (why both includes? one authorizes Amazon SES’s servers, the other authorizes Google’s):
"v=spf1 include:amazonses.com include:_spf.google.com -all"
-all denotes that mail servers should not trust emails from anywhere but the defined domains. If you’re unsure, or believe that you might need to send mail via other domains as well, use ~all (soft fail) instead.
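Once the record is live you can check it with dig (e.g. dig +short TXT yourdomain.com, where yourdomain.com stands in for your actual domain). As a quick offline sanity check, the sketch below just inspects the record string for the two includes and the hard-fail mechanism:

```shell
# The record we expect to have published (matches the example above).
spf='v=spf1 include:amazonses.com include:_spf.google.com -all'

# Verify both senders are authorized and the policy hard-fails everyone else.
# SPF mechanisms are space-separated, so pad and match whole tokens.
for mech in 'include:amazonses.com' 'include:_spf.google.com' '-all'; do
  case " $spf " in
    *" $mech "*) echo "OK: $mech" ;;
    *)           echo "MISSING: $mech" ;;
  esac
done
```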
Heartbleed is a bug in the OpenSSL library that was publicly disclosed on April 7th, 2014 by an internet security firm called Codenomicon. With OpenSSL being the de facto SSL library in both the Apache and nginx webservers, the bug potentially exposes about two thirds of the internet. If we exclude the websites that don’t use SSL at all, we are left with a nice round number: half a million.
Half a million websites exposed to a security hole of this magnitude is completely unheard of in modern history. The mad scramble to get this hole plugged has been ongoing since the disclosure and has involved some major players with Amazon AWS, GitHub, Google and many more putting out announcements about their remediation efforts in the last 48 hours. Here in Canada, the CRA (Canada Revenue Agency) actually shut down all its online filing systems until they could get it sorted out. Maybe a little late, but at least they’re on top of it. Not bad for a monolithic federal department.
The cloud hosting company CloudFlare apparently got wind of this bug about a week before anyone else, which raises the question – why were they told before anyone else? Why didn’t they or the researchers notify any of the major Linux distributions? NDAs? We may never know, but either way this does not sound like “responsible disclosure” to me. When security bugs like this are actually disclosed responsibly, poor sysadmins aren’t up at 3am building custom RPMs while frantically revoking as many SSL certs as they possibly can.
The cause of the Heartbleed bug (a.k.a CVE-2014-0160) was probably one of the rookiest mistakes you can make as a C programmer. Anyone who’s ever written C code before will know the pain caused by not doing bounds checks when performing any kind of memory access. That’s exactly what happened here.
In my opinion, the really sad part about all of this is that this bug has been floating around for over 2 years. Anyone savvy enough, and motivated enough, to attack someone using this exploit will already have done so. The effects of this security breach are going to be felt for years as slow hosting companies neglect to upgrade in time, SSL certificate keys become compromised, people’s account details get stolen, and so on.
The key thing for sysadmins to do right now is to upgrade openssl ASAP – either through official channels or by recompiling it yourself with the OPENSSL_NO_HEARTBEATS flag enabled. Decent system administrators will get this done NOW (maybe sooner – get a time machine, since we’re 2 days in). Good system administrators will also revoke their SSL certificates and issue new ones. Great sysadmins will make sure those replacement certificates are generated using brand new shiny private keys, since the old keys should be considered compromised as well.
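As a quick triage aid, the sketch below classifies an OpenSSL version string against the vulnerable range (1.0.1 through 1.0.1f, fixed in 1.0.1g). One big caveat to note: distro vendors often backport the fix without bumping the version string (check openssl version -b for the build date), so treat this as a first pass, not a definitive answer.

```shell
# Returns 0 (vulnerable) for upstream OpenSSL 1.0.1 .. 1.0.1f, 1 otherwise.
# 0.9.8 and 1.0.0 never had the heartbeat extension, so they're unaffected.
vulnerable() {
  case "$1" in
    1.0.1|1.0.1[a-f]) return 0 ;;
    *)                return 1 ;;
  esac
}

vulnerable 1.0.1e && echo "1.0.1e: vulnerable"
vulnerable 1.0.1g || echo "1.0.1g: patched"
vulnerable 0.9.8y || echo "0.9.8y: not affected (no heartbeat support)"
```

In practice you would feed it the third field of your system’s `openssl version` output.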
Imagine a smart (but evil) attacker setting up a standard Man-in-the-middle attack and pretending to be your bank’s online web portal. Normally this can’t be done because your computer trusts this web portal because it identifies itself as legitimately being your bank using SSL (or TLS more specifically). The attacker can’t impersonate your bank because he doesn’t have the private keys necessary to use the SSL cert. Now imagine that your bank was vulnerable to Heartbleed. The attacker is now able to read off arbitrary 64k blocks of your bank’s webserver’s RAM and given enough time could potentially recover your bank’s private keys.
Now our attacker has the cert and the private key and can set up shop wherever he likes posing as your bank. All that’s left to do at that point is let the logins happen and he can harvest usernames/passwords/sessions to his heart’s content.
So from a consumer point of view, for any site on which you care a lot about security (or any site at all, really) – change your passwords yesterday! Scratch that: wait until the service has contacted you and informed you that the vulnerability is fixed, and then change your password.
Update: I jumped on the “change your passwords now” bandwagon, but as a friend pointed out to me it’s vitally important to make sure that the services you use have fixed the vulnerability BEFORE changing your passwords. Wait for them to contact you telling you that they are no longer vulnerable.
To connect to a box on your network that is running Oracle Database, you will first need to allow connections to Oracle through your firewall.
If you’re running CentOS, RHEL, Fedora or any other Linux variant that uses iptables, use the following commands to create a firewall exception (assuming you’re running your listener on port 1521 – check with sudo lsnrctl status):
$ sudo iptables -I INPUT -p tcp --dport 1521 -j ACCEPT
Or, to limit the connections to a specific IP address (e.g. 192.168.1.20) or an IP block (e.g. 192.168.1.0/24), use the -s (source) option:
$ sudo iptables -I INPUT -p tcp -s 192.168.1.0/24 --dport 1521 -j ACCEPT
If you’re running a high-availability system of some kind, chances are you are into some sort of Load Balancing. If you happen to be writing a Java app, and happen to be using Apache Tomcat as your servlet container, then this tip is for you.
I had a system which needed to be HTTPS-only but also have the SSL terminated at the load balancer. Naturally, I forwarded the HTTP and HTTPS ports on my Elastic Load Balancer and had my application configured to redirect any insecure connections to an SSL connection. I started having a couple of strange issues where occasionally it would leave the connection on HTTP when it should have been redirecting.
My setup was basically:
HTTP (80) -----> ELB -----> Tomcat (8080)
HTTPS (443) -----> ELB -----> Tomcat (8080)
It turned out I needed to set a couple of extra options in my Tomcat configuration to make it aware of the proxy sitting in front of it.
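For reference, one well-documented way to handle this class of problem is Tomcat’s RemoteIpValve: the ELB sets an X-Forwarded-Proto header on connections it terminated, and the valve makes request.isSecure() reflect that, so the app’s redirect logic stops bouncing users back to HTTP. This is a generic sketch for server.xml, not necessarily the exact combination of options I used:

```xml
<!-- Inside the <Host> (or <Engine>) element of server.xml.
     Reads the ELB's forwarding headers so Tomcat sees the original
     client IP and protocol instead of the load balancer's. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="x-forwarded-for"
       protocolHeader="x-forwarded-proto"
       protocolHeaderHttpsValue="https" />

<!-- The plain-HTTP connector both ELB listeners forward to.
     redirectPort points generated redirects at the public HTTPS port. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="443" />
```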
From time to time, I have been known to accidentally type my password into a “username” prompt in a bash shell. In that situation, the password you entered is now part of your ~/.bash_history file forever, unless you truncate or redact it.
A quick command to do this is
$ history -c
Don’t forget to end your session ASAP as your password will still be stored in memory until you do.
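If you’d rather not nuke your whole history, you can redact just the offending line from the history file instead. The sketch below demonstrates on a scratch copy (secret123 stands in for whatever you typed); point histfile at "$HOME/.bash_history" to do it for real:

```shell
# Demo on a scratch file so this is safe to run as-is.
histfile=$(mktemp)
printf 'ls\nsecret123\ncd /tmp\n' > "$histfile"

# Drop the offending line. -F matches a fixed string, so characters
# that look like regex metacharacters in the password are harmless.
grep -vF 'secret123' "$histfile" > "$histfile.tmp" && mv "$histfile.tmp" "$histfile"

cat "$histfile"   # the password line is gone

# Then, from your interactive shell, reload the cleaned file:
#   history -r "$histfile"
```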
For the truly paranoid (like me), I also recommend changing your password right away, in the eventuality that someone was snooping your session at the exact time that you happened to enter your password in plain text.
Now, where’s my tinfoil hat?