Thursday, March 31, 2016

ES6 Modules: The End of Civilization As We Know It?

This article shares some techniques and tools for building web apps using future-friendly ES6 module syntax. We begin with an exploration of current module formats and ways to work both forwards and backwards in time. The complete code for this article can be found here.
For many years JS had a single widely accepted module format, which is to say, there was none. Everything was a global variable petulantly hanging off the window object. This invited risky propositions, too sweet to ignore, and some of us began monkey-patching built-in objects. Chaos prevailed. We had run amok. The JS of that era was nightmarish, intertwined, lacking order and utterly without remorse.

Dark Ages

Long ago an ad hoc group formed to solve the global conflict. The fruits of this vigilante justice are known today as CommonJS. Multiple competing formats were proposed and implemented in the wild by these dashing radicals, and two bright lights emerged with significant adherents: AMD and CJS.
Asynchronous Module Definition (AMD) accounts for the async nature of JS, but some felt the required wrapper function made the code harder to read.
CommonJS (CJS) is synchronous, thus blocking, but generally understood to be an easier read.
Node famously chose CJS but the browser adherents flocked to AMD due to the nonblocking nature and dynamic friendly loading. Some view these technologies at odds, but together they prevailed and JS code became clear, easier to consume and compose. Chaos had been delayed by these twin forces of good.

Happening Right Now

At this point in time, Dec 2014, no module format actually matters if the developer is willing to pay for a build step that recompiles the packages into runtime source. We even retain debugging properties with sourcemaps. Dramatic scenarios staging CJS versus AMD end with the uninteresting conclusion: “it depends on your project needs”.

Formal Standardization

While the primitive chaos of globals was being held back by the medieval outlaws of CJS and AMD, a rational order of JavaScript vendors and concerned citizens began formally standardizing modules into the language proper.
JS is often called JavaScript but is more correctly referred to as ECMAScript (ES) as it is the ECMA-262 standard. After years of thrashing, a standard module format has finally emerged with ES version 6 (ES6).
Some people ❤ ES6 modules and some not-so-much. But as usual, neither camp actually matters: ES6 modules are happening and you can anticipate adoption. Great news: you can start now and compile to any module format of your choosing.

Setup

I’m going to author an ES6 module using package.json as the manifest. We’ll use Node.js and its package manager npm as our tools of choice for constructing the example.
This assumes you have Node installed and a BASH friendly shell. Open up your terminal and enter the following commands:
https://gist.github.com/brianleroux/b9cced91d3102d6f2b83
You end up with a very easy-to-reason-about package structure composed of plain directories and plain text files. No special editors or tools needed.

Authoring

Our next step is to write some code and set up the 6to5 compiler. [Update: 6to5 has since been renamed to Babel.] Let’s start with a simple echoing function written in basic ES.
https://gist.github.com/brianleroux/b4f7d9c963bd79db1ef9
The shiny new ES syntax really surfaces the hidden beauty of JS. (I couldn’t be happier about this.) Like a butterfly it emerges from the cocoon of browser stagnation with poignant lucidity. If ES6 syntax freaks you out, read this one pager. The quick explanation: we define a function called echo that accepts one parameter (called value) and return it concatenated with a string (thus casting it to a String). We then export that function as the module default.
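Since the gist is linked rather than inlined, here is a sketch of what the module might look like based on the description above; the exact prefix string and file layout are assumptions, not the gist's literal contents:

```javascript
// index.js — the ES6 source as described: a function `echo` that takes
// one parameter, returns it concatenated with a string, and is exported
// as the module default. (Sketch only; the linked gist is canonical.)
//
//   export default function echo (value) {
//     return 'echo: ' + value
//   }
//
// The same function body, runnable here outside an ES module context:
function echo (value) {
  return 'echo: ' + value  // concatenation casts value to a String
}

console.log(echo(42))  // → 'echo: 42'
```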

Compiling

https://gist.github.com/brianleroux/d66ac6c070cff500d3d0
Right now most JavaScript runtimes are still ES5-based, so we are going to compile our ES6 source code back to ES5 so it runs everywhere. In the future, we’ll be able to remove this step. Yay!
I set up the compilation in my package.json file as an npm script under the key compile. I’m using 6to5, for reasons I’ll describe more below, but keep in mind there are many tools that do this. From my terminal I can now invoke npm run compile and it will produce valid ES5 code in my project’s ./dist directory. The npm runner is super smart and will use the locally installed 6to5 in node_modules.
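A minimal package.json wiring for this might look like the following; the package name, directory names, and flags are assumptions for illustration, not the gist's literal contents:

```json
{
  "name": "es6-echo",
  "main": "dist/index.js",
  "scripts": {
    "compile": "6to5 src --out-dir dist --source-maps"
  },
  "devDependencies": {
    "6to5": "^3.0.0"
  }
}
```

Because the script runs via npm, the 6to5 binary resolves from the local node_modules/.bin rather than any global install.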
Many modules, 6to5 and Traceur included, encourage you to globally install them. This is a fragile practice as you might have multiple projects with dependencies on different versions of the same module. Rule of thumb: do not install Node packages globally if you can reasonably avoid it. You can almost always reasonably avoid it.

Testing in the Node runtime

Test libraries come in many shapes and sizes. I happen to enjoy Tape, but you may have found happiness elsewhere. That’s cool. You can view the whole source of this ES6 echo module on my GitHub. Clone the repo, run npm i to get the dependencies and finally npm test to check out the results.
For fun, I wrote the tests as ES6 JavaScript.
We import the tape module and our own echo function using a path relative to package.json. The fat arrow => is shorthand for a function expression (it also binds this lexically). Otherwise this is the regular JS you are accustomed to. To run the tests, we issue the command npm test. The script uses 6to5-node to automatically evaluate and execute the ES6-authored source. Note the “main” key in package.json points to the compiled source in dist. Any modules including this module will use that path.
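The test file itself is only linked above, but its shape is roughly the following. Since tape isn't bundled here, a two-line stand-in mimics the slice of its API the test uses (t.equal and t.end are real tape methods; everything else is an assumption for illustration):

```javascript
// test/test-echo.js, roughly — ES6 syntax with fat arrows:
//
//   import test from 'tape'
//   import echo from '../'   // resolves via "main" in package.json
//
// A minimal stand-in for tape so this sketch runs on its own:
const test = (name, fn) => fn({
  equal: (actual, expected) =>
    console.log((actual === expected ? 'ok' : 'not ok') + ' - ' + name),
  end: () => {}
})

const echo = value => 'echo: ' + value  // our module's default export

test('echo returns the value as a string', t => {
  t.equal(echo(1), 'echo: 1')
  t.end()
})
```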
The barrier to testing a module has become very low!

Testing in Browser runtimes

Let’s get this running in the browser. First we’ll need to compile our source code for the browser runtime. This takes two steps: compile to ES5, then make the Node CJS module system browser compatible.
We’ll be using Browserify. Our build script gets updated to compile the tests to ES5, then compile the ES5 source for the browser. But we need a webpage to view this in! Here’s the one I added in test/index.html:
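Reconstructed from the description that follows, the page is roughly this; the script filename and the id match what the text mentions, while the rest is an assumption:

```html
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>echo tests</title>
  </head>
  <body>
    <div id="out"></div>
    <script src="test-echo-browser.js"></script>
  </body>
</html>
```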
Note: we include the generated test-echo-browser.js script just below a div with the id "out". We’ll use that div to display the output of our tests.
To see those results we npm i browserify-tape-spec --save-dev, import the module at the top of our tests, and then use it at the bottom. If we’re running in the browser, we pipe the results through our reporter and into that div.
This is low effort for a test rig. You can wire up Travis and Sauce Labs for maximal continuous integration should the module warrant it. We can add this stuff with time, there is no need to roll out the red carpet for an echo function.
Test output in Firefox Developer Edition

Debugging

Sourcemaps to the rescue! Most compilers have an option for generating sourcemaps. Open the project in your dev tools of choice and you’ll see the futuristic ES6 source, not the compiled ES5 output. Step debugging works as expected.

Publishing

Code isn’t any fun if it only works on your machine. You need to ship it! Ship it real good. I hate myself for writing that.

Github

Most open source projects find themselves on GitHub. I add source code only and generally ensure compiled outputs are in my .gitignore.

The npm Registry

As with Git, we should only publish the bits relevant to the npm Registry. In this case we will ignore the source code and deliberately publish the compiled ES5 code. To do it we run: npm publish.
Publishing only ES5 source is not intuitive at first, but this way anyone targeting an ES5 runtime can use your ES6 authored module. (Which is everyone, for now.) Otherwise, we have to ask the module consumer to own the ES5 compilation. Chances are they already have a build step and chances are better they don’t want to add your build step to their pipeline.
Simply put, we can’t assume everyone will assume the same things. The only realistic assumption is that the target runtime will be ES5 compatible. This will change when ES6 modules roll out, but that will take time.

Browserify CDN

A magical side effect of publishing your module to npm is that it will also be available on http://wzrd.in/ which means you can include that module trivially. Browserify CDN exports your module using Browserify and lets you embed it anywhere.
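For example, if the module were published under the (hypothetical) name es6-echo, embedding it is a single script tag; Browserify CDN builds the standalone bundle on first request and exposes it as a camel-cased global:

```html
<!-- hypothetical package name; wzrd.in serves a standalone UMD build -->
<script src="http://wzrd.in/standalone/es6-echo@latest"></script>
<script>
  // the standalone build hangs the export off window, camel-cased
  console.log(window.es6Echo)
</script>
```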
Here’s an example with CodePen. You could embed in JSFiddle or JSBin or anywhere really and just start require-ing (or rather import-ing) your modules. Even better, RequireBin will automatically add scripts you require. Fun stuff.

Web

If we’re building things right, then our applications are actually modules themselves. Publishing our web module with GitHub Pages is trivial. A fun hack is to use git subtree to publish a compiled directory to your gh-pages branch. (But watch out: using subtree means you have to check generated files into your source control, which is not ideal.)

Mobile

Any code that runs in a vanilla web browser ES5 runtime also runs in Cordova-based projects like PhoneGap. Install the PhoneGap Developer App for iOS, Android, or Windows Phone and see for yourself. It is super easy to get started.

Discovery

Consuming modules from other people is fun. Most of the Node surface can be Browserified and most authors now ensure their Node modules are compatible with both Node and browser runtimes. You can search the registry for further browser ready modules and publish your own.

Issues

Of course this isn’t a perfect landscape. There are problems and the solutions have tradeoffs.

Default Key Emergence

In ES modules the keyword default denotes a default thing to export. Authors of Node style modules love to export only a single function. It is not the only way to do things but it is a very popular way. Unfortunately these concepts are not currently super compatible. Fortunately, 6to5 has a reasonable workaround which I anticipate will flow into the other transpilers. This gist demos the behavior:
The side effect emerges in the Node environment because the default export is transpiled out to the object literal key default, meaning if we are consuming the module from ES5-authored code we have to explicitly write var foo = require('foo').default … and most agree this doesn’t smell very nice. This also means other Node modules are harder to use and thus compose, disrupting the opportunity for reuse all the way down the stack.
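The gist above is only linked, but the behavior is easy to demonstrate in plain ES5. This sketch hand-writes roughly what a transpiler emits for `export default` without any interop helper, then consumes it the way an ES5 require would (module and function names are illustrative):

```javascript
// What `export default function echo () {…}` roughly compiles to
// without interop: the function lands on the exports object under
// the key "default".
var fakeModule = { exports: {} };
(function (exports) {
  'use strict';
  exports['default'] = function echo (value) { return 'echo: ' + value };
})(fakeModule.exports);

// An ES5 consumer's require('echo') hands back the exports object,
// so the call site needs the awkward .default suffix:
var echo = fakeModule.exports['default'];
console.log(echo('hi'));  // → 'echo: hi'

// With 6to5's commonInterop behavior, a lone default export is instead
// assigned straight to module.exports, so a plain require('echo')
// returns the function itself.
```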
Some solutions to this default key emergence problem:
  • Ignore ES modules until they land in all runtimes and continue to write ES5 CJS module.exports and require syntax (boo! hiss! boring!)
  • Wholesale import * as foo from 'foo' and hope the foo package doesn’t change its exports (works, and is probably safe, but feels yolo)
  • Pretend that appending .default is harmless aesthetics … ಠ_ಠ
  • Brutally clobber the problem by appending module.exports = module.default at the end of each module in your ES5 publishing build pipeline (thunderbolt viking style! also yolo)
  • Use the 6to5 compiler with the --modules commonInterop flag, which does the expected thing: compile a lone default export out to module.exports when there are no other named exports.
While we wait for an optimal native runtime env, workarounds are inbound for various compilers. Compiling to ES5 is what ultimately happens anyhow. I personally find 6to5 for publishing modules to npm the cleanest option, at the moment.

Small Modules

The UNIX philosophy lays down many interesting tenets. One of those tenets is do one thing and do it well. The idea manifests today as an avatar of small modules. Small modules compose better and are easier to reason about and test. It is now a classical concept, but this timeless wisdom works well in any language or runtime.
Sometimes there is dissent to the idea of small modules, with the primary claim being that authoring small modules is somehow at odds with using a framework. Supporters of small modules digest this baffling negativity and regurgitate a counter-narrative that frameworks themselves force unnecessary bloat. These concepts are not opposites but complementary; it is ridiculous nonsense to claim otherwise.
Frameworks are not at odds with modularity or discrete units of testable encapsulation, and indeed, all the “major” JS frameworks are themselves composed of small modules. Modules are a good idea.
Frameworks are not evil bastions of bloated lock-in. Frameworks curate concepts to create a system symmetry that can give developers huge productivity boosts by avoiding boilerplate. Frameworks are a good idea.
Frameworks and Small Modules currently coexist and are not mutually exclusive. Polarized debate about these ideas is a waste of time.

Other Minutia

But! AMD! Browserify supports exporting to AMD. There are tonnes of tools for this, and it is safely classifiable as a deployment concern.
But! Dynamic modules! You can use any of these same tools to generate a standalone build and dynamically load it however you want. However, I should warn you: document.write is a blocking call, so if you document.write a script tag, Steve Souders strangles this holiday deer with his bare hands. Save a deer: don’t do it!

Alternatives

I really like the output from 6to5. Google built Traceur; it works well and is supported in CodePen, which is fun. There are a bunch of these things out there. (Which is great.) Choose whichever compiler path makes sense to you.
Just make sure:
  • You are aware of the default key emergence problem in ES5 CJS runtimes
  • You publish to npm as ES5 source so everyone can benefit from your hard work
There are numerous module-loader-compiler things too. Webpack, Spotify Quickstart and SystemJS are interesting alternatives to Browserify.

Summary

You can author, test, consume and deploy modules in the formats that make sense to you. I currently recommend publishing to npm as ES5 for maximal reuse. In this way, a project can then be authored in TypeScript, can consume CoffeeScript and JS modules, and deployed to the web as ES5 AMD. That code can be shared between browser, server and the phone in your pocket. Whatever makes sense for your project needs.
Think about that for a moment. Author in any language, use whatever transport makes sense and enjoy runtime agnostic code. This is the future of web development. That diversity is going to have wonderful side effects we can’t begin to anticipate and I personally find that strangeness very exciting and beautiful. Just like the web itself.

Monday, September 14, 2015

Hosting AngularJS With Amazon S3

So you built your amazing AngularJS application and now you need to host it. Sure, you could stand up a basic Apache or NGINX server to host the application, but there is another option. Amazon S3 is just a static resource host, which makes it perfect for AngularJS applications since they consist entirely of static resources. Another nice feature is that Amazon S3 can be easily tied into Amazon CloudFront, so that your files can be served through a CDN. This works so long as your AngularJS application is using hash (#) or hashbang (#!) routes. I strongly recommend the hashbang approach since that is what Google looks for when indexing a single-page application. While I am focusing on AngularJS, this will also work with Ember.js, Backbone.js, and any other single-page application framework using a similar navigation style.

Basic Setup

This is just a down and dirty way to create a static site with S3. If you need more information on what’s going on, then search for “hosting static site with AWS S3” in your favorite search engine.
  1. Go to the Amazon Web Services Management Console.
  2. Open the S3 control panel.
  3. Create a new bucket named after your application's domain, like app.com.
  4. Now go ahead and upload your application files to the bucket. Make sure when you upload you select Make everything public on the permission step.
  5. Open the property panel for the bucket.
  6. Click Permission and then click Add more permission.
  7. Select Everyone as the Grantee and check List. This allows a 404 to be returned.
  8. Save your changes.
  9. Click Static Website Hosting and then Enable website hosting.
  10. Fill in index.html as the Index Document.
  11. Save your changes.
  12. Copy the address listed under Endpoint in the Static Website Hosting panel.
  13. Now go to your application's domain DNS control panel and add a new CNAME record pointing to the endpoint you got from AWS. It is important that the CNAME domain alias matches what you named your bucket.
  14. That's it: your site should now be working.

Handling Hashbangs

Now you can use your application, but all we really did was set up a static site. We can go one step further. AngularJS applications support routes to control what is displayed to the user. While your site already supports routes as is, it will fail if the user omits the hash or hashbang. For example, app.com/#/item/1 works but app.com/item/1 fails. We can fix this with S3 by having it automatically redirect app.com/item/1 to app.com/#/item/1.
Go back to your application’s bucket properties in AWS. Under Static Website Hosting select Edit Redirection Rules. You should now see an empty text box. Use the code below, making sure to replace the HostName with your application’s domain. If you want to use the hashbang method for your URLs, simply change #/ to #!/.
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>[[ your application's domain name ]]</HostName>
      <ReplaceKeyPrefixWith>#/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
That’s it: S3 will now redirect those URLs into the correct format.

Summary

Congratulations, your AngularJS application is now being hosted by Amazon S3. A few things of note before you go. If you want to use CloudFront so your site is served via a CDN, the redirection rules won’t be applied if you point CloudFront directly at the S3 bucket; you’ll have to point CloudFront at the bucket’s website endpoint for them to work properly. I’ll be making a post discussing that setup at a later date. Another thing you may consider doing is redirecting www to your application’s domain in the DNS. Most DNS services have an easy option to make this happen, but you’ll have to check your DNS service’s documentation. This can also work in reverse; just make sure your bucket name is www.app.com.

Thursday, September 25, 2014

Everything you need to know about the Shellshock Bash bug

Remember Heartbleed? If you believe the hype today, Shellshock is in that league and with an equally awesome name albeit bereft of a cool logo (someone in the marketing department of these vulns needs to get on that). But in all seriousness, it does have the potential to be a biggie and as I did with Heartbleed, I wanted to put together something definitive both for me to get to grips with the situation and for others to dissect the hype from the true underlying risk.
To set the scene, let me share some content from Robert Graham’s blog post; he has been doing some excellent analysis on this. Imagine an HTTP request like this:
target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
http-header = Host:() { :; }; ping -c 3 209.126.230.74
http-header = Referer:() { :; }; ping -c 3 209.126.230.74
Which, when issued against a range of vulnerable IP addresses, results in this:
Ping requests from vulnerable Bash hosts
Put succinctly, Robert has just orchestrated a bunch of external machines to ping him simply by issuing a carefully crafted request over the web. What’s really worrying is that he has effectively caused these machines to issue an arbitrary command (albeit a rather benign ping) and that opens up a whole world of very serious possibilities. Let me explain.

What is Bash and why do we need it?

Skip this if it’s old news, but context is important for those unfamiliar with Bash so let’s establish a baseline understanding. Bash is a *nix shell or in other words, an interpreter that allows you to orchestrate commands on Unix and Linux systems, typically by connecting over SSH or Telnet. It can also operate as a parser for CGI scripts on a web server such as we’d typically see running on Apache. It’s been around since the late 80s where it evolved from earlier shell implementations (the name is derived from the Bourne shell) and is enormously popular. There are other shells out there for Unix variants, the thing about Bash though is that it’s the default shell for Linux and Mac OS X which are obviously extremely prevalent operating systems. That’s a major factor in why this risk is so significant – the ubiquity of Bash – and it’s being described as “one of the most installed utilities on any Linux system”.
You can get a sense of the Bash footprint when you look at the latest Netcraft web server stats:
Netcraft showing half the internet's systems running Apache
When half the net is running Apache (which is typically found on Linux), that’s a significant size of a very, very large pie. That same Netcraft article is reporting that we’ve just passed the one billion websites mark too and whilst a heap of those are sharing the same hosts, that’s still a whole lot of Bash installations. Oh – that’s just web servers too, don’t forget there are a heap of other servers running Linux and we’ll come back to other devices with Bash a bit later too.
Bash can be used for a whole range of typical administrative functions, everything from configuring websites through to controlling embedded software on a device like a webcam. Naturally this is not functionality that’s intended to be open to the world and in theory, we’re talking about authenticated users executing commands they’ve been authorised to run. In theory.

What’s the bug?

Let me start with the CVE from NIST vulnerability database because it gives a good sense of the severity (highlight mine):
GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution.
They go on to rate it a “10 out of 10” for severity or in other words, as bad as it gets. This is compounded by the fact that it’s easy to execute the attack (access complexity is low) and perhaps most significantly, there is no authentication required when exploiting Bash via CGI scripts. The summary above is a little convoluted though so let’s boil it down to the mechanics of the bug.
The risk centres around the ability to arbitrarily define environment variables within a Bash shell which specify a function definition. The trouble begins when Bash continues to process shell commands after the function definition resulting in what we’d classify as a “code injection attack”. Let’s look at Robert’s example again and we’ll just take this line:
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
The function definition is () { :; }; and the shell command is the ping statement and subsequent parameters. When this is processed within the context of a Bash shell, the arbitrary command is executed. In a web context, this would mean via a mechanism such as a CGI script and not necessarily as a request header either. It’s worth having a read through the seclists.org advisory where they go into more detail, including stating that the path and query string could be potential vectors for the attack.
Of course, one means of mitigating this particular attack vector is simply to disable any CGI functionality that makes calls to a shell, and indeed some are recommending this. In many cases though, that’s going to be a seriously breaking change and at the very least one that is going to require some extensive testing to ensure it doesn’t cause immediate problems in the website, which in many cases it will.
The HTTP proof above is a simple but effective one, albeit just one implementation over a common protocol. Once you start throwing in Telnet and SSH and apparently even DHCP, the scope increases dramatically, so by no means are we just talking about exploiting web app servers here. (Apparently the risk is only present in SSH post-auth, but at such an early stage of the public disclosure we’ll inevitably see other attack vectors emerge.)
What you also need to remember is that the scope of potential damage stretches well beyond pinging an arbitrary address as in Robert’s example, that’s simply a neat little proof that he could orchestrate a machine to issue a shell command. The question becomes this: What damage could an attacker do when they can execute a shell command of their choosing on any vulnerable machine?

What are the potential ramifications?

The potential is enormous – “getting shell” on a box has always been a major win for an attacker because of the control it offers them over the target environment. Access to internal data, reconfiguration of environments, publication of their own malicious code etc. It’s almost limitless and it’s also readily automatable. There are many, many examples of exploits out there already that could easily be fired off against a large volume of machines.
Unfortunately when it comes to arbitrary code execution in a shell on up to half the websites on the internet, the potential is pretty broad. One of the obvious (and particularly nasty) ones is dumping internal files for public retrieval. Password files and configuration files with credentials are the obvious ones, but could conceivably extend to any other files on the system.
Likewise, the same approach could be applied to write files to the system. This is potentially the easiest website defacement vector we’ve ever seen, not to mention a very easy way of distributing malware.
Or how about this: one word I keep seeing a lot is “worm”:
I'm at the Virus Bulletin 2014 Conference, taking bets on when we'll see a worm exploiting the #Shellshock bash bug.
When we talk about worm in a malicious computing context, we’re talking about a self-replicating attack where a malicious actor creates code that is able to propagate across targets. For example, we saw a very effective implementation of this with Samy’s MySpace XSS Worm where some carefully crafted JavaScript managed to “infect” a million victims’ pages in less than a day.
The worry with Shellshock is that an attack of this nature could replicate at an alarming rate, particularly early on while the majority of machines remain at risk. In theory, this could take the form of an infected machine scanning for other targets and propagating the attack to them. This would be by no means limited to public facing machines either; get this behind the corporate firewall and the sky’s the limit.
People are working on exploiting this right now. This is what makes these early days so interesting as the arms race between those scrambling to patch and those scrambling to attack heats up.

Which versions of Bash are affected?

The headlines state everything through 4.3 or in other words, about 25 years’ worth of Bash versions. Given everyone keeps comparing this to Heartbleed, consider that the impacted versions of OpenSSL spanned a mere two years which is a drop in the ocean compared to Shellshock. Yes people upgrade their versions, but no they don’t do it consistently and whichever way you cut it, the breadth of at-risk machines is going to be significantly higher with Shellshock than what it was with Heartbleed.
But the risk may well extend beyond 4.3 as well. Already we’re seeing reports of patches not being entirely effective and given the speed with which they’re being rolled out, that’s not all that surprising. This is the sort of thing those impacted by it want to keep a very close eye on, not just “patch and forget”.

When did we first learn of it and how long have we been at risk?

The first mention I’ve found on the public airwaves was this very brief summary on seclists.org, which works out at about 14:00 GMT on Wednesday (about midnight this morning for those of us on the eastern end of Australia). The detail came an hour later in the advisory I mentioned earlier, so getting towards mid-afternoon Wednesday in Europe or morning in the US. It’s still very fresh news with all the usual press speculation and Chicken Little predictions; it’s too early to observe any widespread exploitation in the wild, but that could also come very soon if the risk lives up to its potential.
Scroll back beyond just what has been disclosed publicly and the bug was apparently discovered last week by Stéphane Chazelas, a “Unix/Linux, network and telecom specialist” bloke in the UK. Having said that, in Akamai’s post on the bug, they talk about it having been present for “an extended period of time” and of course vulnerable versions of Bash go back two and a half decades now. The question, as with Heartbleed, will be whether or not malicious actors were aware of this before now and indeed whether they were actively exploiting it.

Are our “things” affected?

This is where it gets interesting – we have a lot of “things” potentially running Bash. Of course when I use this term I’m referring to the “Internet of Things” (IoT) which is the increasing prevalence of whacking an IP address and a wireless adaptor into everything from our cutlery to our door locks to our light globes.
Many IoT devices run embedded Linux distributions with Bash. These very same devices have already been shown to demonstrate serious security vulnerabilities in other areas, for example LIFX light globes just a couple of months ago were found to be leaking wifi credentials. Whilst not a Bash vulnerability like Shellshock, it shows us that by connecting our things we’re entering a whole new world of vulnerabilities in places that were never at risk before.
This brings with it many new challenges; for example, who is actively thinking they should regularly patch their light bulbs? Also consider the longevity of the devices this software is appearing in and whether they’re actually actively maintained. In a case like the vulnerable Trendnet cameras from a couple of years ago, there are undoubtedly a huge number of them still sitting on the web because in terms of patching, they’re pretty much a “set and forget” proposition. In fact, in that case, there’s an entire Twitter account dedicated to broadcasting the images it has captured of unsuspecting owners of vulnerable versions. It’s a big problem with no easy fixes and it’s going to stick with us for a very long time.
But Bash shells are also present in many more common devices, for example our home routers which are generally internet-facing. Remember when you last patched the firmware on your router? Ok, if you’re reading this then maybe you’re the type of technical person who actually does patch their router, but put yourself in the shoes of Average Joe Consumer and ask yourself that again. Exactly.

All our things are on the Microsoft stack, are we at risk?

Short answer “no”, long answer “yes”. I’ll tackle the easy one first – Bash is not found natively on Windows and whilst there are Bash implementations for Windows, it’s certainly not common and it’s not going to be found on consumer PCs. It’s also not clear if products like win-bash are actually vulnerable to Shellshock in the first place.
The longer answer is that just because you operate in a predominantly Microsoft-centric environment doesn’t mean that you don’t have Bash running on machines servicing other discrete purposes within that environment. When I wrote about Heartbleed, I referenced Nick Craver’s post on moving Stack Overflow towards SSL and referred to this diagram of their infrastructure:
Stackoverflow server farm topology
There are non-Microsoft components sitting in front of their Microsoft application stack, components that the traffic needs to pass through before it hits the web servers. These are also components that may have elevated privileges behind the firewall – what’s the impact if Shellshock is exploited on those? It could be significant and that’s the point I’m making here; Shellshock has the potential to impact assets beyond just at-risk Bash implementations when it exists in a broader ecosystem of other machines.

I’m a system admin – what can I do?

Firstly, discovering if you’re at risk is trivial as it’s such an easily reproducible bug. There’s a very simple test suggested by The Register: just run this command within your shell:
env X="() { :;} ; echo busted" /bin/sh -c "echo stuff"
If you get “busted” echoed back out, you’ve successfully exploited the bug and the shell is vulnerable.
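To make that check repeatable, the one-liner can be wrapped in a small script that probes a given shell binary and reports the result. This is a sketch of my own; the shell path argument and the output wording are assumptions, not part of The Register’s original test:

```shell
#!/bin/sh
# Probe a shell binary for the Shellshock function-parsing bug.
# The default target of /bin/sh and the output messages are illustrative.
SHELL_UNDER_TEST="${1:-/bin/sh}"

# Export a variable whose value looks like a function definition followed by
# trailing commands; a vulnerable Bash executes those trailing commands when
# it imports the variable into the new shell's environment.
output=$(env X='() { :;} ; echo busted' "$SHELL_UNDER_TEST" -c 'echo stuff' 2>/dev/null)

case "$output" in
  *busted*) echo "VULNERABLE: $SHELL_UNDER_TEST executed the trailing command" ;;
  *)        echo "OK: $SHELL_UNDER_TEST ignored the trailing command" ;;
esac
```

On a patched system the trailing `echo busted` is treated as part of the variable’s value rather than executed, so only “stuff” comes back from the probe and the script reports “OK”.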
Of course the priority here is going to be patching at risk systems and the patch essentially boils down to ensuring no code can be executed after the end of a Bash function. Linux distros such as Red Hat are releasing guidance on patching the risk so jump on that as a matter of priority.
We’ll inevitably also see definitions for intrusion detection systems, and certainly there will be common patterns to look for here. That may well prove a good immediate-term measure for many organisations, particularly where there may be onerous testing requirements before rolling out patches to at-risk systems. Qualys is aiming to have a definition to detect the attack pretty quickly and inevitably other IDS providers are working on this around the clock as well.
Other more drastic options include replacing Bash with an alternate shell implementation or cordoning off at-risk systems, both of which could have far-reaching ramifications and are unlikely to be decisions taken lightly. But that’s probably going to be the nature of this bug for many people – hard decisions that could have tangible business impact in order to avoid potentially much more significant ramifications.
The other issue which will now start to come up a lot is the question of whether Shellshock has already been exploited in an environment. This can be hard to determine if there’s no logging of the attack vectors (there often won’t be if it’s passed by HTTP request header or POST body), but it’s more likely to be caught than Heartbleed was, where, short of full-on packet captures, the heartbeat payloads would not normally have been logged anywhere. But still, the most common response to “were we attacked via Shellshock?” is going to be this:
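Where request logging does exist, there is at least one distinctive marker to hunt for: the exploit payload has to begin with a function-style definition, “() {”. A retrospective sweep of web server logs might look something like this sketch (the default log path is an assumption; point it at your own logs):

```shell
#!/bin/sh
# Sketch: look for the characteristic Shellshock marker "() {" in access logs.
# The default log path below is an assumption - adjust for your web server.
LOG="${1:-/var/log/apache2/access.log}"

# -F searches for the literal fixed string rather than a regex, since
# "(", ")" and "{" would otherwise all need escaping.
if grep -F '() {' "$LOG"; then
  echo "possible Shellshock attempts found in $LOG"
else
  echo "no Shellshock-style payloads found in $LOG"
fi
```

Bear in mind the caveat above though: if the payload arrived in a header or POST body that was never written to a log, a clean sweep proves nothing.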
unfortunately, this isn't "No, we have evidence that there were no compromises;" rather, "we don't have evidence that spans the lifetime of this vulnerability."  We doubt many people do - and this leaves system owners in the uncomfortable position of not knowing what, if any, compromises might have happened.
Let the speculation about whether the NSA was in on this begin…

I’m a consumer – what can I do?

It depends. Shellshock affects Macs, so if you’re running OS X it appears you’re at risk at this stage. On the one hand that’s bad due to the prevalence of OS X, but on the other hand it should be easily (and hopefully quickly) remediated thanks to a pretty well-proven update mechanism (i.e. Apple can remotely push updates to the machine).
If you’re on a Mac, the risk is easily tested for as described in this Stack Exchange answer:
Testing for Shellshock on OS X
It’s an easy test, although I doubt the average Mac user is going to feel comfortable stepping through the suggested fix which involves recompiling Bash.
The bigger worry is the devices with no easy patching path, for example your router. Short of checking the manufacturer’s website for updated firmware, this is going to be a really hard nut to crack. Often routers provided by ISPs are locked down so that consumers aren’t randomly changing either config or firmware, and there’s not always a remote upgrade path the ISP can trigger either. Combine that with the massive array of devices out there and their varying ages and this could be particularly tricky. Of course it’s also not the sort of thing your average consumer is going to be comfortable doing themselves either.
In short, the advice to consumers is this: watch for security updates, particularly on OS X. Also keep an eye on any advice you may get from your ISP or other providers of devices you have that run embedded software. Do be cautious of emails requesting information or instructing you to run software – events like this are often followed by phishing attacks that capitalise on consumers’ fears. Hoaxes presently have people putting their iPhones in the microwave so don’t for a moment think that they won’t run a random piece of software sent to them via email as a “fix” for Shellshock!

Summary

In all likelihood, we haven’t even begun to fathom the breadth of this vulnerability. Of course there are a lot of comparisons being made to Heartbleed and there are a number of things we learned from that exercise. One is that it took a bit of time to sink in as we realised the extent to which we were dependent on OpenSSL. The other is that it had a very long tail – months after it hit there were still hundreds of thousands of known hosts left vulnerable.
But in one way, the Heartbleed comparison isn’t fair – this is potentially far worse. Heartbleed allowed remote access to a small amount of data in the memory of affected machines. Shellshock enables remote injection of arbitrary commands pre-auth, which is potentially far more dire. In that regard, I have to agree with Robert:
This 'bash' bug is probably a bigger deal than Heartbleed, btw.
It’s very, very early days yet – only half a day since it first hit the airwaves at the time of writing – and I sus