Friday, October 28, 2011

I want a magical operator to assuage my async woes (and a pony)

Lately, I have spent a lot of time thinking about how I could reduce the tedium of async programming in JavaScript. For example, consider a typical helper that uses an XMLHttpRequest to do a GET and returns a Deferred (this example uses jQuery's implementation of Deferred; there are many other reasonable implementations, and there is a great need to settle on a standard API, but that is a subject for another post):
/** @return {Deferred} */
var simpleGet = function(url) {
  var deferred = new $.Deferred();

  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
      if (xhr.status == 200) {
        deferred.resolve(xhr.responseText);
      } else {
        deferred.reject(xhr.status);
      }
    }
  };'GET', url, true /* async */);

  return deferred;
};
What I want is a magical ~ operator that requires (and understands) an object that implements a well-defined Deferred contract so I can write my code in a linear fashion:
/** @return {Deferred} */
var getTitle = function(url) {
  if (url.substring(0, 7) != 'http://') url = 'http://' + url;
  var html = ~simpleGet(url);
  var title = html.match(/<title>(.*)<\/title>/)[1];
  return title;
};

/** Completes asynchronously, but does not return a value. */
var logTitle = function(url) {
  try {
    var title = ~getTitle(url);
  } catch (e) {
    console.log('Could not extract title from ' + url);
  }
};
Unfortunately, to get this type of behavior today, I have to write something like the following (and even then, I am not sure whether the error handling is quite right):
/** @return {Deferred} */
var getTitle = function(url) {
  if (url.substring(0, 7) != 'http://') url = 'http://' + url;

  var deferred = new $.Deferred();
  simpleGet(url).then(function(html) {
    var title = html.match(/<title>(.*)<\/title>/)[1];
    deferred.resolve(title);
  }, function(error) {
    deferred.reject(error);
  });

  return deferred;
};

/** Completes asynchronously, but does not return a value. */
var logTitle = function(url) {
  var deferred = getTitle(url);
  deferred.then(function(title) {
    console.log(title);
  }, function(error) {
    console.log('Could not extract title from ' + url);
  });
};
I am curious how difficult it would be to programmatically translate the first into the second. I spent some time playing with generators in Firefox, but I could not seem to figure out how to emulate my desired behavior. I also spent some time looking at the ECMAScript wiki, but it is unclear whether they are talking about exactly the same thing.
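For the record, here is the kind of generator-driven runner I was hoping to arrive at (a sketch only: `runAsync` is a hypothetical helper, written against today's generator API for clarity; Firefox's 2011-era generators used `send()` rather than `next()`):

```javascript
// Hypothetical helper: drive a generator that yields Deferred-like objects,
// resuming the generator with each resolved value and throwing rejection
// errors back into it, so try/catch behaves like the ~ examples above.
var runAsync = function(generatorFn) {
  var gen = generatorFn();
  var step = function(method, value) {
    var result;
    try {
      result = gen[method](value);  // resume the generator
    } catch (e) {
      return;  // the generator did not catch the error; stop
    }
    if (result.done) {
      return;  // the generator ran to completion
    }
    // result.value is assumed to be Deferred-like: it has then(onDone, onFail).
    result.value.then(
        function(v) { step('next', v); },
        function(e) { step('throw', e); });
  };
  step('next', undefined);
};
```

With a helper like this, `logTitle` could be written as `runAsync(function*() { var title = yield getTitle(url); ... })`, with each `yield` standing in for the magical `~`.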

In terms of modern alternatives, it appears that C#'s await and async keywords are the closest thing to what I want right now. Unfortunately, I want to end up with succinct JavaScript that runs in the browser, so I'm hoping that either CoffeeScript or Dart will solve this problem, unless the ECMAScript committee gets to it first!

Please feel free to add pointers to related resources in the comments. There is a lot out there to read these days (the Dart mailing list alone is fairly overwhelming), so there's a good chance that there is something important that I have missed.

Update (Fri Oct 28, 6:15pm): I might be able to achieve what I want using deferred functions in Traceur. Apparently I should have been looking at the deferred functions strawman proposal more closely: I skimmed it and assumed it was only about defining a Deferred API.

Want to learn about a suite of tools to help manage a large JavaScript codebase? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Tuesday, August 2, 2011

An Examination of goog.base()

A few weeks ago, I started working on adding an option to CoffeeScript to spit out Closure-Compiler-friendly JavaScript. In the process, I discovered that calls to a superclass constructor in CoffeeScript look slightly different than they do in the Closure Library. For example, if you have a class Foo and a subclass Bar, then in CoffeeScript, the call [in the generated JavaScript] to invoke Foo's constructor from Bar's looks like:
Bar.__super__.constructor.call(this, a, b);
whereas in the Closure Library, the canonical thing is to do the following, identifying the superclass function directly:, a, b);
The two are functionally equivalent, though CoffeeScript's turns out to be slightly simpler to use as a developer because it does not require the author to know the name of the superclass when writing the line of code. In the case of CoffeeScript, where JavaScript code generation is being done, this localization of information makes the translation of CoffeeScript to JavaScript easier to implement.

The only minor drawback to using the CoffeeScript form when using Closure (though note that you would have to use superClass_ instead of __super__) is that the CoffeeScript call is more bytes of code. Unfortunately, the Closure Compiler does not know that Bar.superClass_.constructor is equivalent to Foo, so it does not rewrite it as such, though such logic could be added to the Compiler.

This piqued my curiosity about how goog.base() is handled by the Compiler, so I ended up taking a much deeper look at goog.base() than I ever had before. I got so caught up in it that I ended up composing a new essay on what I learned: "An Examination of goog.base()."

The upshot of all this is that in my CoffeeScript-to-Closure translation code, I am not going to translate any of CoffeeScript's super() calls into goog.base() calls because avoiding goog.base() eliminates a couple of issues. I will still use goog.base() when writing Closure code by hand, but if Closure code is being autogenerated anyway, then using goog.base() is not as compelling.

Finally, if you're wondering why I started this project a few weeks ago and have not made any progress on the code since then, it is because I got married and went on a honeymoon, so at least my wife and I would consider that a pretty good excuse!

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Friday, July 1, 2011

Writing useful JavaScript applications in less than half the size of jQuery

Not too long ago, I tried to bring attention to how little of the jQuery library many developers actually use and argue that frontend developers should consider what sort of improvements their users would see if they could compile their code with the Advanced mode of the Closure Compiler. Today I would like to further that argument by taking a look at TargetAlert, my browser extension that I re-released this week for Chrome.

TargetAlert is built with Closure and plovr, using the template I created for developing a Chrome extension. The packaged version of the extension includes three JavaScript files:

Name Size (bytes) Description
targetalert.js 19475 content script that runs on every page
options.js 19569 logic for the TargetAlert options page
targetalert.js 3590 background page that channels information from the options to the content script
Total 42634  

By comparison, the minified version of jQuery 1.6 is 91342 bytes, which is more than twice the size of the code for TargetAlert. (The gzipped sizes are 14488 vs. 31953, so the relative sizes are the same, even when gzipped.)

And to put things in perspective, here is the set of goog.require() statements that appear in TargetAlert code, which reflects the extent of its dependencies:
I include this to demonstrate that there was no effort to re-implement parts of the Closure Library in order to save bytes. On the contrary, one of the primary reasons to use Closure is that you can write code in a natural, more readable way (which may be slightly more verbose), and make the Compiler responsible for minification. Although competitions like JS1K are fun and I'm amazed to see how small JS can get when it is hand-optimized, even the first winner of JS1K, Marijn Haverbeke, admits, "In terms of productivity, this is an awful way of coding."

When deploying a packaged browser extension, there are no caching benefits to consider: if your extension includes a copy of jQuery, then it adds an extra 30K to the user's download, even when gzipped. To avoid this, your extension could reference a copy of jQuery hosted on a public CDN (or equivalent) from its code, but then it may not work when offline. Bear in mind that in some parts of the world (including the US! think about data plans for tablets), users have quotas for how much data they download, so you're helping them save a little money if you can package your resources more efficiently.

Further, if your browser extension has a content script that runs on every page, keeping your JS small reduces the amount of code that will be executed on every page load, minimizing the impact of your extension on the user's browsing experience. As users may have many extensions installed, if everyone starts including an extra 30K of JS, then this additional tax can really start to add up! Maybe it's time you gave Closure a good look, if you haven't already.

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Monday, June 27, 2011

The Triumphant Return of TargetAlert!

About seven years ago, my adviser and I were sitting in his office Googling things as part of research for my thesis. I can't remember what we were looking for, but just after we clicked on a promising search result, the Adobe splash screen popped up. As if on cue, we both let out a groan in unison as we waited for the PDF plugin to load. In that instant, it struck me that I could build a small Firefox extension to make browsing the Web just a little bit better.

Shortly thereafter, I created TargetAlert: a browser extension that would warn you when you were about to click on a PDF. It used the simple heuristic of checking whether the link ended in pdf, and if so, it inserted a PDF icon at the end of the link as shown on the original TargetAlert home page.

And that was it! My problem was solved. Now I was able to avoid inadvertently starting up Adobe Reader as I browsed the Web.

But then I realized that there were other things on the Web that were irritating, too! Specifically, links that opened in new tabs without warning or those that started up Microsoft Office. Within a week, I added alerts for those types of links, as well.

After adding those features, I should have been content with TargetAlert as it was and put it aside to focus on my thesis, but then something incredible happened: I was Slashdotted! Suddenly, I had a lot more traffic to my site and many more users of TargetAlert, and I did not want to disappoint them, so I added a few more features and updated the web site. Bug reports came in (which I recorded), but it was my last year at MIT, and I was busy interviewing and TAing on top of my coursework and research, so updates to TargetAlert were sporadic after that. It wasn't until the summer between graduation and starting at Google that I had time to dig into TargetAlert again.

But the primary reason that TargetAlert development slowed is that Firefox extension development should have been fun, and it wasn't. At the time, every time you made a change to your extension, you had to restart Firefox to pick up the change. As you can imagine, that made for a slow edit-reload-test cycle, inhibiting progress. Also, instead of using simple web technologies like HTML and JSON, Firefox encouraged the use of more obscure things, such as XUL and RDF. The bulk of my energy was spent on getting information into and out of TargetAlert's preferences dialog (because I actually tried to use XUL and RDF, as recommended by Mozilla), whereas the fun part of the extension was taking the user's preferences and applying them to the page.

The #1 requested feature for TargetAlert was for users to be able to define their own alerts (as it was, users could only enable or disable the alerts that were built into TargetAlert). Conceptually, this was not a difficult problem, but realizing the solution in XUL and RDF was an incredible pain. As TargetAlert didn't generate any revenue and I had other personal projects (and work projects!) that were more interesting to me, I never got around to satisfying this feature request.

Fast-forward to 2011 when I finally decommissioned a VPS that I had been paying for since 2003. Even though I had rerouted all of its traffic to a new machine years ago and it was costing me money to keep it around, I put off taking it down because I knew that I needed to block out some time to get all of the important data off of it first, which included the original CVS repository for TargetAlert.

As part of the data migration, I converted all of my CVS repositories to SVN and then to Hg, preserving all of the version history (it should have been possible to convert from CVS to Hg directly, but I couldn't get hg convert to work with CVS). Once I had all of my code from MIT in a modern version control system, I started poking around to see which projects would still build and run. It turns out that I have been a stickler for creating build.xml files for personal projects for quite some time, so I was able to compile more code than I would have expected!

But then I took a look at TargetAlert. The JavaScript that I wrote in 2004 and 2005 looks gross compared to the way I write JavaScript now. It's not even that it was totally disorganized -- it's just that I had been trying to figure out what the best practices were for Firefox/JavaScript development at the time, and they just didn't exist yet.

Further, TargetAlert worked on pre-Firefox 1.0 releases through Firefox 2.0, so the code is full of hacks to make it work on those old versions of the browser that are now irrelevant. Oh, and what about XUL? Well, my go-to resource for XUL back in the day has since been shut down by its owners, which made making sense of that old code even more discouraging. Once again, digging into Firefox extension development to get TargetAlert to work on Firefox 4.0 did not appear to be much fun.

Recently, I have been much more interested in building Chrome apps and extensions (Chrome is my primary browser, and unlike most people, I sincerely enjoy using a Cr-48), so I decided to port TargetAlert to Chrome. This turned out to be a fun project, especially because it forced me to touch a number of features of the Chrome API, so I ended up reading almost all of the documentation to get a complete view of what the API has to offer (hooray learning!).

Compared to Firefox, the API for Chrome extension development seems much better designed and documented. Though to be fair, I don't believe that Chrome's API would be this good if it weren't able to leverage so many of the lessons learned from years of Firefox extension development. For example, Greasemonkey saw considerable success as a Firefox extension, which made it obvious that Chrome should make content scripts an explicit part of its API. (It doesn't hurt that the creator of Greasemonkey, Aaron Boodman, works on Chrome.) Also, where Firefox uses a wacky, custom manifest file format for one metadata file and an ugly ass RDF format for another metadata file, Chrome uses a single JSON file, which is a format that all web developers understand. (Though admittedly, having recently spent a bit of time with manifest.json files for Chrome, I feel that the need for my suggested improvements to JSON is even more compelling.)
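To give a taste of the difference, a Chrome extension's metadata is a single manifest.json along these lines (illustrative values, not TargetAlert's actual manifest):

```json
{
  "name": "TargetAlert",
  "version": "0.6",
  "description": "Adds alert icons to links with surprising targets.",
  "content_scripts": [
    {
      "matches": ["http://*/*", "https://*/*"],
      "js": ["targetalert.js"]
    }
  ],
  "background_page": "background.html",
  "options_page": "options.html"
}
```

One JSON file declares the content script, background page, and options page; Firefox spread the equivalent information across install.rdf and chrome.manifest in two different formats.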

As TargetAlert was not the first Chrome extension I had developed, I already had some idea of how I would structure my new extension. I knew that I wanted to use both Closure and plovr for development, which meant that there would be a quick build step so that I could benefit from the static checking of the Closure Compiler. Although changes to Chrome extensions do not require a restart to pick up any changes, they do often require navigating to chrome://extensions and clicking the Reload button for your extension. I decided that I wanted to eliminate that step, so I created a template for a Chrome extension that uses plovr in order to reduce the length of the edit-reload-test cycle. This enabled me to make fast progress and finally made extension development fun again! (The README file for the template project has the details on how to use it to get up and running quickly.)

I used the original code for TargetAlert as a guide (it had some workarounds for web page quirks that I wanted to make sure made it to the new version), and within a day, I had a new version of TargetAlert for Chrome! It had the majority of the features of the original TargetAlert (as well as some bug fixes), and I felt like I could finally check "resurrect TargetAlert" off of my list.

Except I couldn't.

A week after the release of my Chrome extension, I only had eight users according to my Chrome developer dashboard. Back in the day, TargetAlert had tens of thousands of users! This made me sad, so I decided that it was finally time to make the Chrome version of TargetAlert better than the original Firefox verison: I was finally going to support user-defined alerts! Once I actually sat myself down to do the work, it was not very difficult at all. Because Chrome extensions have explicit support for an options page in HTML/JS/CSS that has access to localStorage, building a UI that could read and write preferences was a problem that I have solved many times before. Further, being able to inspect and edit the localStorage object from the JavaScript console in Chrome was much more pleasant than mucking with user preferences in about:config in Firefox ever was.

So after years of feature requests, Target Alert 0.6 for Chrome is my gift to you. Please install it and try it out! With the exception of the lack of translations in the Chrome version of TargetAlert (the Firefox version had a dozen), the Chrome version is a significant improvement over the Firefox one: it's faster, supports user-defined alerts, and with the seven years of Web development experience that I've gained since the original, I fixed a number of bugs, too.

Want to learn more about web development and Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Thursday, June 9, 2011

It takes a village to build a Linux desktop

tl;dr Instead of trying to build your own Ubuntu PC, check out a site like instead.

In many ways, this post is more for me than for you—I want to make sure I re-read this the next time I am choosing a desktop computer.

My Windows XP desktop from March 2007 finally died, so it was time for me to put together a new desktop for development. Because Ubuntu on my laptop had been working out so well, I decided that I would make my new machine an Ubuntu box, too. Historically, it was convenient to have a native Windows machine to test IE, Firefox, Chrome, Safari, and Opera, but Cygwin is a far cry from GNOME Terminal, so using Windows as my primary desktop environment had not been working out so well for me.

As a developer, I realize that my computer needs are different from an ordinary person's, but I didn't expect it would be so difficult to buy the type of computer that I wanted on the cheap (<$1000). Specifically, I was looking for:

  • At least 8GB of RAM.
  • A 128GB solid state drive. (I would have been happy with 64GB because this machine is for development, not storing media, but Jeff Atwood convinced me to go for 128GB, anyway.)
  • A video card that can drive two 24" vertical monitors (I still have the two that I used with my XP machine). Ideally, the card would also be able to power a third 24" if I got one at some point.
  • A decent processor and motherboard.
I also wanted to avoid paying for:
  • A Windows license.
  • A CD/DVD-ROM drive.
  • Frivolous things.
I did not think that I would need a CD-ROM drive, as I planned to install Ubuntu from a USB flash drive.

I expected to be able to go to Dell or HP's web site and customize something without much difficulty. Was I ever wrong. At first, I thought it was going to be easy as the first result for [Dell Ubuntu] looked very promising: it showed a tower starting at $650 with Ubuntu preinstalled. I started to customize it: upgrading from 4GB to 8GB of RAM increased the price by $120, which was reasonable (though not quite as good as Amazon). However, I could not find an option to upgrade to an SSD, so buying my own off of Newegg would cost me $240. Finally, the only options Dell offered for video cards were ATI, and I have had some horrible experiences trying to get dual monitors to work with Ubuntu and ATI cards in the past (NVIDIA seems to be better about providing good Linux drivers). At this point, I was over $1000, and was not so sure about the video card, so I started asking some friends for their input.

Unfortunately, I have smart, capable friends who build their own machines from parts, and they convinced me that I could, too. You see, in general, I hate dealing with hardware. For me, hardware is simply an inevitable requirement for software. When software goes wrong, I have some chance of debugging it and can attack the problem right away. By comparison, when hardware goes wrong, I am less capable, and may have to wait until a new part from Amazon comes in before I can continue debugging the problem, which sucks.

At the same time, I realized that I should probably get past my aversion to dealing with hardware, so I started searching for blog posts about people who had built their own Ubuntu boxes. I found one post by a guy who built his PC for $388.95, which was far less than the Dell that I was looking at! Further, he itemized the parts that he bought, so at least I knew that if I followed his steps, I would end up with something that worked with Ubuntu (ending up with hardware that was not supported by Ubuntu was one of my biggest fears during this project). I cross-checked this list with a friend who had recently put together a Linux machine with an Intel i7 chip, and he was really happy with it, so I ended up buying it and the DX58SO motherboard that was recommended for use with the i7. This made things a bit pricier than they were in the blog post I originally looked at:

Motherboard Intel DX58SO Extreme Series X58 ATX Triple-channel DDR3 16GB SLI
CPU Intel Core i7 950 3.06GHz 8M L3 Cache LGA1366 Desktop Processor
RAM Corsair XMS3 4 GB 1333MHz PC3-10666 240-pin DDR3 Memory Kit for Intel Core i3 i5 i7 and AMD CMX4GX3M1A1333C9 (2x$39.99) $79.98
Case Cooler Master Elite 360 RC-360-KKN1-GP ATX Mid Tower/Desktop Case (Black)
Hard Drive Crucial Technology 128 GB Crucial RealSSD C300 Series Solid State Drive CTFDDAC128MAG-1G1
Power Supply Corsair CMPSU-750HX 750-Watt HX Professional Series 80 Plus Certified Power Supply compatible with Core i7 and Core i5
Sales Tax $55.47
Total $1027.90

At this point, I should have acknowledged that what I had put together (on paper) was now in the price range of what I was originally looking at on Dell's web site. Unfortunately, I had mentally committed to being a badass and building a machine at this point, so I forged ahead with my purchase.

I also should have acknowledged that this list of parts did not include a video card...

In a few days, everything had arrived and I started putting everything together as best as I could figure out. I tried following the assembly instructions verbatim from the manuals, but that proved to be a huge mistake, as the suggested assembly order was not appropriate for my parts. For example, the case instructions recommended that I install the power supply first, then the motherboard, though that ended up making the SATA connectors inaccessible, so I had to remove the motherboard, then the power supply, plug in the SATA cables, and then put everything back together again. (This is one of many examples of exercises like this that I went through.)

When I was close to having something that I thought would boot, I finally accepted the fact that I had failed to order a video card, so I tried using the one from my XP machine in hopes that it would allow me to kick off the Ubuntu installation process, and then I could walk to Best Buy to purchase a new video card. In the months leading up to the death of my XP machine, I had a lot of problems with my monitors, so it should have been no surprise that my installation screen looked like this:

Regardless, this allowed me to run the memory test (which runs forever, by the way—I let this run for hours before I decided to investigate why it never stopped) while I went off to Best Buy. Because I had already convinced myself that an NVIDIA card would work better with Ubuntu than an ATI, I went ahead and bought this badass looking video card: the EVGA GeForce GTX 550 Ti. It was not cheap ($250 at Best Buy, though it would have been $125 on Amazon), but I was a man on a mission, so nothing could stop me.

Once I got home and dropped the card in, I had a problem that I never would have anticipated: the video card did not fit in my case. Specifically, the card fit in my case, but it was so close to the power supply that there was not enough room to plug in the separate power connector that the video card needed. At that point, I was both desperate and determined, so I took out the power supply and tried to wrench off the casing to create a big enough hole so there would be enough room behind the video card to plug it in. As you can see, I did succeed in removing quite a bit of metal from the power supply (and most definitely voided the warranty):

Despite my handiwork with a pair of pliers, I was unable to remove enough metal from the power supply to create enough space to power the video card, so it would have to go back to Best Buy. I decided to search for [ubuntu 11.04 best video card] to find something that I could overnight from Amazon. I followed the links from this blog post, and decided to go with the ZOTAC nVidia GeForce 9800GT, which was $102.32 after tax and overnight shipping. One of the main selling points for me was the following comment in one of the Amazon reviews: "Another advantage of this card is that it DOES NOT require your power supply to have a video card power connector." Although I was originally hoping to get a card with two DVI inputs (instead of one DVI and one VGA), I really just wanted something that would work at that point.

While I was waiting for the new video card to arrive, I tried installing Linux on my SSD with my half-assed video card. Although it seemed like it was going to install off of the USB flash drive on Day 1, my PC did not seem to want to accept it on Day 2. I spent some time Googling for "@ogt error" that morning (because that is what I saw on my screen), until I realized that it was actually saying "boot error" and my video card was just garbling the characters. I rewrote the USB drive with all sorts of different ISOs, and I started to wonder whether buying the cheapest flash drive at Best Buy (it was $8 for 4GB!) was a mistake. I then tried the USB drive on another Windows machine I had lying around, and it took, at which point I was really stumped. Again, I asked some friends what they thought, and they recommended installing from a CD, as that was much more reliable.

As you may recall, buying a CD-ROM drive was something I had avoided, so what could I do? I tried reusing the one from my old Dell, but that turned out to be a non-starter because it required an IDE connector rather than a SATA one. Instead, I hoofed it back to Best Buy to "borrow" a drive for 24 hours or so. This was one of my better decisions during this entire process, as installing from a CD-R that I burned with an 11.04 ISO worked flawlessly.

Once my video card finally came in and Ubuntu was installed, I finally felt like I was almost done! Of course, I was wrong. Getting two 24" monitors working in portrait mode turned out to be quite a challenge. The first step was to install the NVIDIA drivers for Ubuntu. Originally I downloaded binaries from the web site, but that broke everything. Fortunately a friend helped me figure out how to uninstall the binary driver (by re-running the NVIDIA .run installer with the --uninstall flag) and replace it with a proper package (sudo apt-get install nvidia-current). Now things were working fine with one monitor in landscape mode, but the jump to portrait was more challenging.

Initially, I tried to do everything through the NVIDIA GUI on Ubuntu, but it did not present an option to rotate the monitor. I found a blog post that recommended adding Option "Rotate" "CCW" to /etc/X11/xorg.conf, which indeed helped me get the first monitor working in portrait mode. I was able to add the second monitor via the NVIDIA GUI and edited xorg.conf again to rotate it. At this point, everything looked great except that I could not drag windows from one monitor to another. To do that, I had to enable "TwinView" in the NVIDIA GUI, which did enable me to drag windows across screens, except NVIDIA insisted that the cursor flow from the bottom of the left monitor to the top of the right monitor instead of horizontally across. I did many Google searches to try to find a simple solution, but I had no luck. Ultimately, I ended up reading up on xorg.conf until I understood it enough to edit it by hand to get things to work. At long last, everything was working!
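For reference, the hand-edited stanza ends up being roughly of this shape (illustrative only: the identifiers, MetaModes, and resolutions depend on your card and monitors, so treat this as a starting point rather than a working config):

```
Section "Device"
    Identifier  "NvidiaCard"
    Driver      "nvidia"
    Option      "TwinView"   "true"
    Option      "Rotate"     "CCW"
    Option      "MetaModes"  "DFP-0: 1200x1920 +0+0, DFP-1: 1200x1920 +1200+0"
EndSection
```

The MetaModes line is what finally controls how the two screens are stitched together, which is why the GUI's bottom-to-top cursor behavior could only be fixed by editing it by hand.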

The final step was to get everything in the case. This was a little tricky because my power supply came with a ton of cables, and wedging them in such that they did not block any of the three exposed fans was non-trivial. Further, the case did not have a proper slot for an SSD, so I ended up buying a 3.5 to 2 X 2.5-Inch Bay Converter to hold the SSD in place. Unfortunately, the case had weird screw holes such that it was impossible to secure the converter in place, but fortunately the top fan prevents the bay from falling into the rest of the case, so it seems good enough. Considering that I already have a power supply with a gaping hole in it and a mess of cables, this did not seem like my biggest concern.

So what have I learned? Primarily, I learned that I should never do this again, but if I had to, I would be much less afraid to mess with hardware the next time around. Including the cost of the video card and the bay converter, I spent $1138.92 in cash and about two days worth of my time. Most of that two days was spent being angry and frustrated. When I was just about finished with the entire project, I noticed an ad on one of the blog posts I used for help to a site I had not heard of before: Apparently they sell Ubuntu laptops, desktops, and servers, and they have a product line called Wildebeest Performance that I could customize to basically exactly what I said I wanted at the outset of this project. On, a machine with Ubuntu 11.04, an i7 processor, 8GB of RAM, a 120GB SSD, and an NVIDIA card with two inputs (one DVI, one VGA) costs $1117.00, which is less than what I paid when buying the parts individually. Obviously buying the machine directly would have been a huge time savings, and I'm sure the inside of the system76 PC would not be nearly as sloppy as mine. In this, and in many other things, I need to have more patience and do more research before diving into a project. It can save a lot of time and sanity in the long run.

Friday, May 27, 2011

GData: I can't take it anymore

I have been playing with GData since I was on the Google Calendar team back in 2005/2006. My experiences with GData can be summarized by the following graph:

There are many reasons why GData continues to infuriate me:
  • GData is not about data—it is an exercise in XML masturbation. If you look at the content of a GData feed, most of the bytes are dedicated to crap that does not matter due to a blind devotion to the Atom Publishing Protocol. In recent history, GData has become better in providing clean JSON data, but the equivalent XML it sends down is still horrifying by comparison. I understand the motivation to provide an XML wire format, but at least the original Facebook API had the decency to use POX to reduce the clutter. Atom is the reason why, while I was at Google, the first GData JSON API we released had to have a bunch of dollar signs and crap in it, so that you could use it to construct the Atom XML equivalent when making a request to the server. When I wanted to add web content events to Calendar in 2006, most of my energy was spent on debating what the semantically correct Atom representation should be rather than implementing the feature. Hitching GData to the Atom wagon was an exhausting waste of time and energy. Is it really that important to enable users to view their updates to a spreadsheet in Google Reader?
  • REST APIs aren't cool. Streaming APIs are cool. If we want to have a real time Web, then we need streaming APIs. PubSub can be used as a stopgap, but it's not as slick. For data that may change frequently (such as a user's location in Google Latitude), you have no choice but to poll aggressively using a REST API.
  • GData has traditionally given JavaScript developers short shrift. Look at the API support for the GData Java client library or Python client library compared to the JavaScript client library. JavaScript is the lingua franca of the Web: give it the attention it deserves.
  • The notion of Atom forces the idea of "feeds" and "entries." That is fine for something like Google Calendar, but is less appropriate for hierarchical data, such as that stored in Google Tasks. Further, for data that does not naturally split into "entries," such as a Google Doc, the entire document becomes a single entry. Therefore, making a minor change to a Google Doc via GData requires uploading the entire document rather than the diff. This is quite expensive if you want to create your own editor for a Google Doc that has autosave.
  • Perhaps the biggest time sink when getting started with GData is wrapping your head around the authentication protocols. To play around with your data, the first thing you have to do is set up a bunch of crap to get an AuthSub token. Why can't I just fill out a form on and give myself one? Setting up AuthSub is not the compelling piece of the application I want to build—interacting with my data is. Let me play with my data first and build a prototype so I can determine if what I'm trying to build is worth sharing with others and productionizing, and then I'll worry about authentication. Facebook's JavaScript SDK does this exceptionally well. After registering your application, you can include one <script> tag on your page and start using the Facebook API without writing any server code. It's much more fun and makes it easier to focus on the interesting part of your app.
If GData were great, then Google products would be built on top of GData. A quick look under the hood will reveal that no serious web application at Google (Gmail, Calendar, Docs, etc.) uses it. If GData isn't good enough for Google engineers, then why should we be using it?

Monday, May 16, 2011

Reflecting on my Google I/O 2011 Talk

This year was my first trip to Google I/O, both as an attendee and a speaker. The title of my talk was JavaScript Programming in the Large with Closure Tools. As I have spoken to more and more developers, I have come to appreciate how jQuery and Closure are good for different things, and that Closure's true strength lies in large JavaScript codebases, particularly for SPAs. With the increased interest in HTML5 and offline applications, I believe that JavaScript codebases will continue to grow, and that Closure will become even more important as we move forward, which is why I was eager to deliver my talk at I/O.

Although it does not appear to be linked from the sessions page yet, the video of my I/O talk is available on YouTube. I have also made my slides available online, though I made a concerted effort to put less text on the slides than normal, so they may not make as much sense without the narration from my talk.

I was incredibly nervous, but I did watch all 57 minutes of the video to try to evaluate myself as a speaker. After observing myself, I'm actually quite happy with how things went! I was already aware that I sometimes list back and forth when speaking, and that's still a problem (fortunately, most of the video shows the slides, not me, so you may not be able to tell how bad my nervous habit is). My mumbling isn't as bad as it used to be (historically, I've been pretty bad about it during ordinary conversation, so mumbling during public speaking was far worse). It appears that when I'm diffident about what I'm saying (such as when I'm trying to make a joke that I'm not sure the audience will find funny), I often trail off at the end of the sentence, so I need to work on that. On the plus side, the word "like" appears to drop out of my vernacular when I step on a stage, and I knew my slides well enough that I was able to deliver all of the points I wanted to make without having to stare at them too much. (I never practiced the talk aloud before giving it—I only played through what I wanted to say in my head. I can't take myself seriously when I try to deliver my talk to myself in front of a mirror.)

If you pay attention during the talk, you'll notice that I switch slides using a real Nintendo controller. The week before, I was in Portland for JSConf, which had an 8-bit theme. There, I gave a talk on a novel use of the with keyword in JavaScript, but I never worked the 8-bit theme into my slides, so I decided to do so for Google I/O (you'll also note that Mario and Link make cameos in my I/O slides). Fortunately, I had messed around with my USB NES RetroPort before, so I already had some sample Java code to leverage—I ended up putting the whole NES navigation thing together the morning of my talk.

For my with talk the week before, I had already created my own presentation viewer in Closure/JavaScript so I could leverage things like prettify. In order to provide an API to the NES controller, I exported some JavaScript functions to navigate the presentation forward and backward (navPresoForward() and navPresoBack()). Then I embedded the URL to the presentation in an org.eclipse.swt.browser.Browser and used com.centralnexus.input.Joystick to process the input from the controller and convert right- and left-arrow presses into browser.execute("navPresoForward()") and browser.execute("navPresoBack()") calls in Java. (The one sticking point was discovering that joystick input had to be processed in a special thread scheduled by Display.asyncExec().) Maybe it wasn't as cool as Marcin Wichary's Power Glove during his and Ryan's talk, The Secrets of Google Pac-Man: A Game Show, but I thought theirs was the best talk of the entire conference, so they were tough to compete with.

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Wednesday, April 20, 2011 uses only 34% of jQuery

I created a Firefox extension, JsBloat, to help determine what fraction of the jQuery library a web page uses. It leverages JSCoverage to instrument jQuery and keep track of which lines of the library are executed (and how often). The table below is a small sample of sites that I tested with JsBloat. Here, "percentage used" means the fraction of lines of code that were executed in the version of the jQuery library that was loaded:

URL              jQuery version   % used on page load   % used after mousing around    1.4.2            18%                   34%
                 1.4.4            23%                   23%       1.5.1            30%                   33%
                 1.4.4            19%                   23%

Note that three of the four sites exercise new code paths as a result of mousing around the page, so they do not appear to be using pure CSS for their hover effects. For example, on, a single mouseover of the "Lightweight Footprint" text causes a mouseover animation that increases the percentage of jQuery used by 11%! Also, when jQuery is loaded initially on, it calls $() 61 times, but after mousing around quite a bunch (which only increases the percentage of code used to 33%), the number of times that $() is executed jumps to 9875! (Your results may vary, depending on how many elements you mouse over, but it took less than twenty seconds of mousing for me to achieve my result. See the Postscript below to learn how to run JsBloat on any jQuery-powered page.) Although code coverage is admittedly a coarse metric for this sort of experiment, I still believe that the results are compelling.

I decided to run this test because I was curious about how much jQuery users would stand to gain if they could leverage the Advanced mode of the Closure Compiler to compile their code. JavaScript that is written for Advanced mode (such as the Closure Library) can be compiled so that it is far smaller than its original source because the Closure Compiler will remove code that it determines is unreachable. Therefore, it will only include the lines of JavaScript code that you will actually use, whereas most clients of jQuery appear to be including much more than that.
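As a hedged sketch of the kind of dead-code elimination Advanced mode performs (the helper functions below are invented for illustration, not taken from jQuery or the Closure Library):

```javascript
// Hypothetical page code: it pulls in a two-function "library" but only
// ever calls one of the functions.
function usedHelper(x) { return x * 2; }    // reachable from the call below
function unusedHelper(x) { return x / 2; }  // provably never called

var result = usedHelper(21);

// Advanced-mode compilation performs whole-program reachability analysis:
// unusedHelper is deleted outright and usedHelper is typically inlined,
// so the shipped output collapses to roughly:  var result = 42;
```

Note that code coverage is a runtime measurement while the Compiler's analysis is static, so the two will not match exactly, but unexecuted lines give a rough upper bound on what could be stripped.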

From these preliminary results, I believe that most sites that use jQuery could considerably reduce the amount of JavaScript that they serve by using Closure. As always, compiling custom JavaScript for every page must be weighed against caching benefits, though I suspect that the Compiler could find a healthy subset of jQuery that is universal to all pages on a particular site.

If you're interested in learning more about using Closure to do magical things to your JavaScript, come find me at Track B of JSConf where I'm going to provide some mind-blowing examples of how to use the with keyword effectively! And if I don't see you at JSConf, then hopefully I'll see you at Google I/O where I'll be talking about JavaScript Programming in the Large with Closure Tools.

Postscript: If you're curious how JsBloat works...

JsBloat works by intercepting requests to for jQuery and injecting its own instrumented version of jQuery into the response. The instrumented code looks something like:
for (var i = 0, l = insert.length; (i < l); (i++)) {
  var elems = ((i > 0)? this.clone(true): this).get();
  ret = ret.concat(elems);
}
return this.pushStack(ret, name, insert.selector);
so that after each line of code is executed, the _$jscoverage global increments its count for the (file, line number) pair. JSCoverage provides a simple HTML interface for inspecting this data, which JsBloat exposes.
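The counter statements themselves are stripped from the excerpt above; a hedged sketch of what JSCoverage-style per-line bookkeeping looks like (the file name and line numbers here are illustrative, not real JsBloat output):

```javascript
// Illustrative JSCoverage-style instrumentation; 'jquery.js' and the
// line numbers are made up for the example.
var _$jscoverage = {};
_$jscoverage['jquery.js'] = [];

function sum(xs) {
  _$jscoverage['jquery.js'][101] = (_$jscoverage['jquery.js'][101] || 0) + 1;
  var total = 0;
  for (var i = 0; i < xs.length; i++) {
    _$jscoverage['jquery.js'][102] = (_$jscoverage['jquery.js'][102] || 0) + 1;
    total += xs[i];
  }
  return total;
}

sum([1, 2, 3]);
// The coverage UI would then report line 101 hit once and line 102 hit
// three times for this file.
```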

You can tell when JsBloat has intercepted a request because it injects a button into the upper-left-hand corner of the page, as shown below:

Clicking on that button toggles the JSCoverage UI:

Once JsBloat is installed, you can also configure it to intercept requests to any URL with jQuery. For example, loads jQuery from, so you can set a preference in about:config to serve the instrumented version of jQuery 1.4.4 when loads jQuery from its CDN. More information for JsBloat users is available on the project page.

If you are interested in extending JsBloat to perform your own experiments, the source is available on Google Code under a GPL v2 license. (I don't ordinarily make my work available under the GPL, but because JSCoverage is available under the GPL, JsBloat must also be GPL'd.) For example, if you download JSCoverage and the source code for JsBloat, you can use JSCoverage to instrument your own JavaScript files and then include them in a custom build of JsBloat. Hopefully this will help you identify opportunities to trim down the JavaScript that you send to your users.

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Wednesday, April 6, 2011

Suggested Improvements to JSON

Today I'm publishing an essay about my suggested improvements to JSON. I really like JSON a lot, but I think that it could use a few tweaks to make it easier to use for the common man.

One question some have asked is: even if these changes to JSON were accepted, how would you transition existing systems? I would lean towards what I call the Nike approach, which is: "just do it." That is, start updating the JSON.parse() method in the browser to accept these extensions to JSON. What I am proposing is a strict superset of what's in RFC 4627 today (which you may recall claims that "[a] JSON parser MAY accept non-JSON forms or extensions"), so it would not break anyone who continued to use ES3 style JSON.

At first, you may think that sounds ridiculous -- how could we update the spec without versioning? Well, if you haven't been paying attention to the HTML5 movement, we are already doing this sort of thing all the time. Browser behavior continues to change/improve, and you just have to roll with the punches. It's not ideal, yet the Web moves forward, and it's faster than waiting for standards bodies to agree on something.

Though if the Nike approach is too heretical, then I think that versioning is also a reasonable option. In the browser, a read-only JSON.version property could be added, though I don't imagine most developers would check it at runtime anyway. Like most things on the web, a least-common-denominator approach would be used by those who want to be safe, which would ignore the version number. Only those who do user-agent sniffing on the server would be able to serve a slimmer JavaScript library, though that is already true for many other browser features today. I trust Browserscope and much more than any sort of formal specification, anyway.

Special thanks to Kushal, Dolapo, and Mihai for providing feedback on my essay.

Monday, April 4, 2011

What I learned from üjs

For April Fool's, I released a mock JavaScript library, üjs (pronounced "umlaut JS"). The library was "mock" in the "mockumentary" Spin̈al Tap sense, not the unit testing sense. It took me about a day to create the site, and I learned some interesting lessons along the way, which I thought I would share:
  • In purchasing the domain name ü, I learned about Punycode. Basically, when you type ü into the browser, the browser will translate it to the Punycode equivalent, which is On one hand, this is what prevents you from buying göö and creating a giant phishing scheme; on the other hand, because most users are unfamiliar with Punycode, going to a legitimate domain like ü looks like a giant phishing scheme due to the rewrite.
  • I only promoted üjs through three channels: Twitter, Hacker News, and Reddit. On Twitter, I discovered that using a Punycode domain as your punchline really limits its reach because instead of tweeting about ü, many tweeted about (because that's what you can copy from the location bar after you visit the site), or (somewhat ironically) a URL-shortened version of the domain. I suspect more people would have followed the shared link if they could see the original domain name.
  • According to Google Analytics, just over half of my traffic (50.04%) on April 1 was from Hacker News (where I made it to the front page!). Another 9.89% was from Reddit. Analytics also claims that only 1.57% of my traffic came from Twitter, though 29.76% of my traffic was "direct," so I assume that was from Twitter, as well. On April 1, I had 3690 visits, and then another 443 on April 2 (presumably from the eastern hemisphere who woke up on Saturday to an aftermath of Internet April Fools' pranks).
  • GoDaddy was not able to do a domain search for ü, presumably due to the non-ASCII characters. It turned out that was able to, so I ended up going with them. (Presumably if I understood Punycode well enough at the outset of this project, I could have registered through their site.)
  • I found a discount code for, so my total cost out of pocket for this project was $8.75 (my hosting fees are a sunk cost).
  • Not that it was a substantial investment, but I was hopeful/curious whether I could break even in some way. I felt that traditional ads would have been a little over the top, so instead I decided to include two Amazon Associates links. Although I produced 350 clicks to Amazon (!), I failed to generate any conversions, so I did not make any money off of my ads.
  • Chartbeat is really, really cool. It made it much more fun to watch all of the web traffic to ü during the day. (I wish that I generally had enough traffic to make it worth using Chartbeat all the time!) I believe that I had 144 simultaneous visitors during the peak of üjs, and I was amazed at how dispersed the traffic was from across the globe.
  • One thing that I did not realize is that Chartbeat does not do aggregate statistics. Fortunately, I set up Google Analytics in addition to Chartbeat, so I had access to both types of data.
  • Response times were about 1s on average during peak traffic times. At first, I thought that was horrendously slow, but then I realized that there were a large number of requests coming from outside the US, which increased the average. Most of the requests from the US loaded in the low hundreds of milliseconds, which made me feel good about my hosting choice (who really is excellent, btw).
  • The n̈ in Spin̈al Tap is not a valid Unicode character. Instead, it is displayed by printing an n followed by an umlaut, and what I can only assume is the kerning makes the umlaut display over the previous character. (The umlaut does not display correctly on my Chrome 10 on Windows, but it's fine on Linux.) Other characters, such as ü, are valid Unicode and can be displayed with a single code point. Wikipedia has a list of letters that "come with umlauts," so I used those characters whenever possible on ü, but for others, I had to use the "letter followed by an umlaut" trick.
  • var n\u0308; is a valid JavaScript variable declaration, but var \u00fc; is not.
  • Initially, the GitHub badge on my page did not link to anything, as I just wanted to satirize the trend I've been seeing in open source lately. Though after a request from a coworker, I imported all of the files I used to create üjs into GitHub. (Incidentally, when I tried to name the GitHub project üjs, they replaced the ü with a hyphen, so I renamed it umlautjs).
In the end, even though I did not make my $8.75 back, I had a really great time with this project. Although it's clear that some people on the Web don't enjoy April Fools, I think it is a nice opportunity to see some good satire. (My personal favorite was Angry Nerds by Atlassian.)

Further, satire is not completely frivolous: it is an (arguably passive-aggressive) way of making a point. In üjs, mine was this: these tiny JavaScript libraries do not help move the needle when it comes to JavaScript development from the community standpoint. Instead of contributing a tiny library, why not focus on contributing a tiny fix to a big library? Think about how many more people you will affect with a bug fix to jQuery than by publishing a single JavaScript file with your favorite four helper functions. Or if you are going to create a new library, make sure that you are doing something substantially different from what else is out there. For example, Closure and jQuery are based on different design principles. Both have their use cases, and they serve very different classes of development, so it makes sense for those separate projects to exist and grow.

If you have been following my blog, you probably know that I'm really big on "JavaScript programming in the large," which will be the subject of my talk at Google I/O this year. I hope to see you there!

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Friday, April 1, 2011

Awesome New JavaScript Library!

Despite months of advocating Closure, I've finally given up and found a superior JavaScript library (and it's not jQuery!): http://ü

Tuesday, January 18, 2011

What are the most important classes for high school students to succeed in software engineering?

What are the most important classes for high school students to succeed in software engineering? That is the question that I try to answer in an essay of the same name.

Also, this is the first essay I have written using NJEdit, which is the editing software that I built (and have since open sourced) in order to write Closure: The Definitive Guide. It helps me focus more on content while worrying less about formatting, though it still has a ways to go before becoming my "one click" publishing solution.

A unique feature of NJEdit is that when I produce the HTML to publish my essay, I also produce the DocBook XML version as a by-product! It's not a big selling point today, but if I ever want to publish anything to print again, I'll be ready! For open-source projects that are slowly creating HTML documentation that they hope to publish as a print book one day, NJEdit might be the solution.

And if it is, maybe someone will help me fix its bugs...

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Wednesday, January 12, 2011

My latest Closure talks

Last Thursday, January 6, I had a doubleheader, as I gave one talk in the afternoon at an undergraduate MIT Web programming competition (6.470), and another talk in the evening at the Boston JavaScript Meetup. It was a long day (I took the train up from NYC that morning), but I got to talk to so many interesting people that I was far more energized at the end of the day than exhausted!

I cleaned up the slide decks from both talks so they are suitable for sharing. Because I was informed that the MIT programming competition had a lot of freshmen in it, I didn't go into as much detail about Closure as I normally would have because I didn't want to overwhelm them. (I took so much out of my "standard" talk that I ended up finishing ~30 minutes early, though that was likely a welcome relief to the students, as I was the last speaker in a full day of lectures.) As you can see, the MIT talk is only 17 slides:

The talk I prepared for the Boston JS Meetup was the most technical one I have given to date. It was my first time presenting for a group that actually has "JavaScript" in the name, so it was refreshing not to have to spend the first 15 minutes explaining to the audience that JavaScript is a real language and that you can do serious things with it. By comparison, my second talk went much longer (about an hour and a half?), as there was a lot more material to cover as well as tons of great questions from the (very astute!) audience during my presentation:

The one thing that these presentations do not capture is the plovr demo that I have been giving during my talks. (This was also the first time that I demoed using plovr with jQuery, as I had just attempted using jQuery with plovr for the first time myself the night before. I have an open topic on the plovr discussion group about how to make plovr easier to use for jQuery developers, so please contribute if you have thoughts!) At some point, I'm planning to do a webcast with O'Reilly on Closure, so that might be a good opportunity to record a plovr demo that can be shared with everyone.

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Tuesday, January 11, 2011

Web Content Wizard is Fixed AGAIN!

Back in 2006, I released this little tool for Google Calendar called the Web Content Wizard. With it, you can add your own "web content events" in Google Calendar. Unfortunately, there is no way to add such events via the Google Calendar UI, so normally your only alternative is to do some gnarly web programming to talk to GData.

Things were going well until one day, the Web Content Wizard just broke. The GData team changed something on their end which caused my poor little app to stop working. If I remember correctly, it was because they created this new thing about registering your web application. I learned of the problem because some users wrote in, and eventually I found the time to debug the problem and fix it.

Then things were going well again for a while, but then in late 2009 (when I had some newfound free time), I decided that it was finally time to migrate from the Red Hat 9 server it had been running on since 2003! I had a VPS from RimuHosting (who rocks btw!) that I acquired in 2006 for my project with Mike Lambert, but I had never had the time to move my content from over there. (Getting PHP 5 with its native JSON parsing methods was what finally put me over the edge, as I repeatedly failed at compiling them myself for the version of PHP 4 that was running on Shrike.)

As you can imagine, not everything worked when I did the migration in 2009. Honestly, I probably should have just gotten an even newer VPS then and migrated both servers, as now is running on a dusty version of Ubuntu 8.04! (I'm running version 0.9.5 of Mercurial, which means that I don't even have the rebase extension!) Anyway, when I did the migration, I was far more preoccupied with migrating my conf files for Apache from 1.3 to 2.0 than I was with getting the Web Content Wizard to work again. Every endeavor of mine involving GData always involves a minimum of 3 hours work and a lot of cursing, and I just wasn't up for it.

Then today I got another polite email asking me why the Web Content Wizard wasn't working. I have amassed quite a few of these over the years, and I generally star them in my inbox, yet never make the time to investigate what was wrong. But for some reason, tonight I felt motivated, so I dug in and decided to debug it. I [wrongly] assumed that GData had changed on me again, so I focused my efforts there first. Surprisingly, the bulk of my time was spent getting the right error message out of PHP, as all it told me was that my request to the GData server was returning a 400. Now, I know GData is pretty lousy at a lot of things, but when it gives you a 400, in my experience, at least it gives you a reason!

I use cURL under the hood to communicate with GData from PHP, and that has never been pretty. I thought I remembered editing something in /etc/php.ini (which was now at /etc/php5/apache2/php.ini) to get cURL working correctly on the old, but I could not remember what it was, nor did I appear to have documented it.

The key insight occurred when I changed this call to curl_setopt() to set the CURLOPT_FAILONERROR option to false:
curl_setopt($ch, CURLOPT_FAILONERROR, false);
Suddenly, my requests to GData contained a real error message:
The value following "version" in the XML declaration must be a quoted string
This did not make any sense because I was sure that I was sending well-formed XML to GData and it contained the <?xml version="1.0"?> preamble as it should. I decided to print it back out to make sure that things were being sent as I expected, but when I did so, I noticed that all of my double-quotes were inexplicably escaped with backslashes. That explained why the error message was complaining that the XML was not well-formed, but not where the backslashes came from.

Then it dawned on me: php.ini has a bunch of stupid settings with the word magic in them. Sure enough, I did a search for [php HTTP_POST_VARS escape quotes], which eventually led me to the documentation for the PHP function stripslashes(), which finally led me to the documentation for magic_quotes_gpc, which basically says: "Because the average PHP developer is so likely to Little Bobby Tables himself, we put the coding behaviors for adults on a high shelf where little PHP developers can't reach them and automatically backslash escape quotes because said developers are so likely to cram that string directly into a SQL statement without escaping it."
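A hedged JavaScript re-creation of the failure mode (the real culprit was PHP's magic_quotes_gpc mangling the POSTed data, but the effect on the XML preamble is the same):

```javascript
// Simulate magic_quotes_gpc: backslash-escape quotes in incoming data.
function magicQuotes(s) {
  return s.replace(/(["'\\])/g, '\\$1');
}

var xml = '<?xml version="1.0"?><entry/>';
var mangled = magicQuotes(xml);
// mangled now begins with <?xml version=\"1.0\"?> -- the value after
// "version" is no longer a quoted string, which is exactly the parse
// error GData reported.
```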

I immediately changed the line in php.ini to magic_quotes_gpc = Off, restarted Apache, and all of a sudden, the Web Content Wizard was fixed AGAIN!

At least until my next server upgrade...

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Wednesday, January 5, 2011

Released a new version of plovr so I can show it off at my two Closure lectures this Thursday

I just released a new version of plovr in time for the two talks on Closure I am giving this Thursday. The first is a guest lecture for an MIT web programming competition: 6.470. My understanding is that the course attracts a lot of freshmen, so my hope is that plovr can help those who are new to JavaScript leverage the Closure Compiler to help them identify their JavaScript errors (of which there are likely to be many!). This motivated me to fix some annoying issues in plovr that have been lingering for a while (which the early adopters have been nice enough to put up with for far too long :).

The second talk I am going to give is part of the Boston JS Meetup, which I am very excited to attend for the first time. I'm looking forward to talking to the group and showing off what both Closure and plovr can offer to those in industry.

Also, up until now, I have been making decisions autonomously about new plovr features and config options because I did not have a good way to gather feedback from users other than the bug reports I received. To that end, I finally decided to create a Google Group for plovr where users can help influence plovr development and ask questions about how to use plovr. As I noted in the Closure Library group, I would like to extend plovr to help with CSS management and developing against jQuery (among other things), and I believe these features will only be useful to others if prospective users participate in the discussion. We'll see how all of that works out.

Finally, one interesting engineering challenge that has plagued plovr since its inception (but has finally been slain!) is the question of how to manage its dependencies on the Closure Library, Closure Compiler, and Closure Templates. With a considerable amount of help from Ilia Mirkin (and a lot of SVN and Hg gymnastics), I finally have a system that I am happy with. Basically, it requires creating a branch for each dependent project to which updates are periodically added and then merged into the main repository. As this is a general problem for open-source projects, I documented it thoroughly on the plovr wiki.

Although it was tricky to set up, it is now possible to build plovr without checking out the other Closure projects. (Previously, a new version of plovr would include my own local checkouts of the closure-library, closure-compiler, and closure-templates repositories, which were at somewhat arbitrary revisions with possibly even more arbitrary/undocumented patches of my own in them.) Further, because plovr has its own clones of the Closure projects in its repository, it is far easier to modify Closure code if it needs to be edited for plovr. For example, it was now trivial to make the constructor of public, whereas previously I would have had to create a completely separate binary for the Closure Compiler with this one modification and then replace it with the one in plovr's lib directory. Further, I would forever have to keep my checkout of the Closure Compiler with the modification around so that I could make further modifications in the future while no outside developer would have any insight into that process -- no more!

This also allowed me to clean up the build process for plovr and remove some unnecessary and/or duplicate dependencies. The release of plovr I did back in December was 11.7 MB, but the new release is down to 8.4 MB, which is the smallest plovr jar I have ever published! Things are definitely moving in the right direction for the project.

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!