Monday, June 27, 2011

The Triumphant Return of TargetAlert!

About seven years ago, my adviser and I were sitting in his office Googling things as part of research for my thesis. I can't remember what we were looking for, but just after we clicked on a promising search result, the Adobe splash screen popped up. As if on cue, we both let out a groan in unison as we waited for the PDF plugin to load. In that instant, it struck me that I could build a small Firefox extension to make browsing the Web just a little bit better.

Shortly thereafter, I created TargetAlert: a browser extension that would warn you when you were about to click on a PDF. It used a simple heuristic: if a link's target ended in .pdf, it inserted a PDF icon at the end of the link, as shown on the original TargetAlert home page.
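In today's terms, the whole heuristic boils down to a few lines of JavaScript. Here is a minimal sketch of the idea (not the original code, and the icon path is a placeholder):

    // Scan the page for links that appear to point at PDFs and append
    // a warning icon to each one. A sketch of the idea, not the
    // original TargetAlert code.
    function annotatePdfLinks(doc) {
      var links = doc.getElementsByTagName('a');
      for (var i = 0; i < links.length; i++) {
        var link = links[i];
        // The original heuristic: does the link target end in pdf?
        // Testing the pathname ignores any query string.
        if (/\.pdf$/i.test(link.pathname)) {
          var icon = doc.createElement('img');
          icon.src = 'pdf-alert.png';  // placeholder icon path
          icon.alt = '[PDF]';
          link.appendChild(icon);
        }
      }
    }

    annotatePdfLinks(document);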

And that was it! My problem was solved. Now I was able to avoid inadvertently starting up Adobe Reader as I browsed the Web.

But then I realized that there were other things on the Web that were irritating, too! Specifically, links that opened in new tabs without warning or those that started up Microsoft Office. Within a week, I added alerts for those types of links, as well.

After adding those features, I should have been content with TargetAlert as it was and put it aside to focus on my thesis, but then something incredible happened: I was Slashdotted! Suddenly, I had a lot more traffic to my site and many more users of TargetAlert, and I did not want to disappoint them, so I added a few more features and updated the web site. Bug reports came in (which I recorded), but it was my last year at MIT, and I was busy interviewing and TAing on top of my coursework and research, so updates to TargetAlert were sporadic after that. It wasn't until the summer between graduation and starting at Google that I had time to dig into TargetAlert again.

The primary reason that TargetAlert development slowed, though, is that Firefox extension development should have been fun, but it wasn't. Back then, every time you made a change to your extension, you had to restart Firefox to pick up the change. As you can imagine, that made for a slow edit-reload-test cycle, which inhibited progress. Also, instead of using simple web technologies like HTML and JSON, Firefox encouraged the use of more obscure ones, such as XUL and RDF. The bulk of my energy was spent getting information into and out of TargetAlert's preferences dialog (because I actually tried to use XUL and RDF, as Mozilla recommended), whereas the fun part of the extension was taking the user's preferences and applying them to the page.

The #1 requested feature for TargetAlert was for users to be able to define their own alerts (as it was, users could only enable or disable the alerts that were built into TargetAlert). Conceptually, this was not a difficult problem, but realizing the solution in XUL and RDF was an incredible pain. As TargetAlert didn't generate any revenue, and I had other personal projects (and work projects!) that were more interesting to me, I never got around to satisfying this feature request.

Fast-forward to 2011 when I finally decommissioned a VPS that I had been paying for since 2003. Even though I had rerouted all of its traffic to a new machine years ago and it was costing me money to keep it around, I put off taking it down because I knew that I needed to block out some time to get all of the important data off of it first, which included the original CVS repository for TargetAlert.

As part of the data migration, I converted all of my CVS repositories to SVN and then to Hg, preserving all of the version history (it should have been possible to convert from CVS to Hg directly, but I couldn't get hg convert to work with CVS). Once I had all of my code from MIT in a modern version control system, I started poking around to see which projects would still build and run. It turns out that I have been a stickler for creating build.xml files for personal projects for quite some time, so I was able to compile more code than I would have expected!
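For anyone facing the same CVS-to-Hg migration, the two-step route looks roughly like this (a sketch of the approach rather than the exact commands I ran; paths and repository names are placeholders):

    # Step 1: CVS -> SVN. cvs2svn can create and populate a brand-new
    # Subversion repository directly from a CVS module.
    cvs2svn --svnrepos=/tmp/targetalert-svn /path/to/cvsroot/targetalert

    # Step 2: SVN -> Hg, preserving the version history. This requires
    # enabling the bundled "convert" extension in ~/.hgrc:
    #   [extensions]
    #   convert =
    hg convert file:///tmp/targetalert-svn targetalert-hg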

But then I took a look at TargetAlert. The JavaScript that I wrote in 2004 and 2005 looks gross compared to the way I write JavaScript now. It's not that the code was totally disorganized; it's that I was trying to figure out the best practices for Firefox/JavaScript development at a time when they just didn't exist yet.

Further, TargetAlert worked on pre-1.0 releases of Firefox through Firefox 2.0, so the code is full of hacks to support old versions of the browser that are now irrelevant. Oh, and what about XUL? Well, my go-to resource for XUL back in the day was xulplanet.com, but its owners have since shut it down, which made making sense of that old code even more discouraging. Once again, digging into Firefox extension development to get TargetAlert working on Firefox 4.0 did not appear to be much fun.

Recently, I have been much more interested in building Chrome apps and extensions (Chrome is my primary browser, and unlike most people, I sincerely enjoy using a Cr-48), so I decided to port TargetAlert to Chrome. This turned out to be a fun project, especially because it forced me to touch a number of features of the Chrome API, so I ended up reading almost all of the documentation to get a complete view of what the API has to offer (hooray learning!).

Compared to Firefox, the API for Chrome extension development seems much better designed and documented. Though to be fair, I don't believe that Chrome's API would be this good if it weren't able to leverage so many of the lessons learned from years of Firefox extension development. For example, Greasemonkey saw considerable success as a Firefox extension, which made it obvious that Chrome should make content scripts an explicit part of its API. (It doesn't hurt that the creator of Greasemonkey, Aaron Boodman, works on Chrome.) Also, where Firefox uses a wacky, custom manifest file format for one metadata file and an ugly-ass RDF format for another, Chrome uses a single JSON file, a format that all web developers understand. (Though admittedly, having recently spent a bit of time with manifest.json files for Chrome, I feel that the case for my suggested improvements to JSON is even more compelling.)
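To make that concrete, here is roughly what a Chrome extension's single metadata file looks like (an illustrative manifest, not TargetAlert's actual one; the script name is made up):

    {
      "name": "TargetAlert",
      "version": "0.6",
      "description": "Warns you about links to PDFs, new windows, etc.",
      "content_scripts": [
        {
          "matches": ["http://*/*", "https://*/*"],
          "js": ["targetalert.js"]
        }
      ],
      "options_page": "options.html"
    }

Everything (the name, the version, which content scripts to inject and where) lives in this one JSON file, where Firefox would split the same information across install.rdf and chrome.manifest.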

As TargetAlert was not the first Chrome extension I had developed, I already had some idea of how I would structure my new extension. I knew that I wanted to use both Closure and plovr for development, which meant that there would be a quick build step so that I could benefit from the static checking of the Closure Compiler. Although Chrome does not require a browser restart to pick up changes to an extension, it does often require navigating to chrome://extensions and clicking the Reload button for your extension. I decided that I wanted to eliminate that step, so I created a template for a Chrome extension that uses plovr in order to shorten the edit-reload-test cycle. This enabled me to make fast progress and finally made extension development fun again! (The README file for the template project has the details on how to use it to get up and running quickly.)

I used the original code for TargetAlert as a guide (it had some workarounds for web page quirks that I wanted to make sure made it to the new version), and within a day, I had a new version of TargetAlert for Chrome! It had the majority of the features of the original TargetAlert (as well as some bug fixes), and I felt like I could finally check "resurrect TargetAlert" off of my list.

Except I couldn't.

A week after the release of my Chrome extension, I had only eight users according to my Chrome developer dashboard. Back in the day, TargetAlert had tens of thousands of users! This made me sad, so I decided that it was finally time to make the Chrome version of TargetAlert better than the original Firefox version: I was finally going to support user-defined alerts! Once I actually sat down to do the work, it was not very difficult at all. Because Chrome extensions have explicit support for an options page written in HTML/JS/CSS with access to localStorage, building a UI that could read and write preferences was a problem that I had solved many times before. Further, being able to inspect and edit the localStorage object from the JavaScript console in Chrome was much more pleasant than mucking with user preferences in about:config in Firefox ever was.
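A stripped-down sketch of the options-page plumbing (the key name and the shape of an alert are made up for illustration):

    // Persist the user-defined alerts from the options page.
    // localStorage only stores strings, so serialize as JSON.
    function saveUserAlerts(alerts) {
      localStorage.setItem('userAlerts', JSON.stringify(alerts));
    }

    // Read the alerts back out, falling back to an empty list.
    function loadUserAlerts() {
      var json = localStorage.getItem('userAlerts');
      return json ? JSON.parse(json) : [];
    }

    // Example: alert on PowerPoint links with a custom icon.
    saveUserAlerts([{suffix: '.ppt', icon: 'powerpoint.png'}]);
    console.log(loadUserAlerts());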

So after years of feature requests, TargetAlert 0.6 for Chrome is my gift to you. Please install it and try it out! With the exception of translations (the Firefox version had a dozen; the Chrome version has none yet), the Chrome version is a significant improvement over the Firefox one: it is faster, it supports user-defined alerts, and with the seven years of web development experience that I have gained since the original, I fixed a number of bugs, too.

Want to learn more about web development and Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Thursday, June 9, 2011

It takes a village to build a Linux desktop

tl;dr: Instead of trying to build your own Ubuntu PC, check out a site like system76.com.

In many ways, this post is more for me than for you—I want to make sure I re-read this the next time I am choosing a desktop computer.

My Windows XP desktop from March 2007 finally died, so it was time for me to put together a new desktop for development. Because Ubuntu on my laptop had been working out so well, I decided that I would make my new machine an Ubuntu box, too. Historically, it was convenient to have a native Windows machine to test IE, Firefox, Chrome, Safari, and Opera, but Cygwin is a far cry from GNOME Terminal, so using Windows as my primary desktop environment had not been working out so well for me.

As a developer, I realize that my computer needs are different from an ordinary person's, but I didn't expect it would be so difficult to buy the type of computer that I wanted on the cheap (<$1000). Specifically, I was looking for:

  • At least 8GB of RAM.
  • A 128GB solid state drive. (I would have been happy with 64GB because this machine is for development, not storing media, but Jeff Atwood convinced me to go for 128GB, anyway.)
  • A video card that can drive two 24" vertical monitors (I still have the two that I used with my XP machine). Ideally, the card would also be able to power a third 24" if I got one at some point.
  • A decent processor and motherboard.
I also wanted to avoid paying for:
  • A Windows license.
  • A CD/DVD-ROM drive.
  • Frivolous things.
I did not think that I would need a CD-ROM drive, as I planned to install Ubuntu from a USB flash drive.

I expected to be able to go to Dell or HP's web site and customize something without much difficulty. Was I ever wrong. At first, I thought it was going to be easy, as the first result for [Dell Ubuntu] looked very promising: it showed a tower starting at $650 with Ubuntu preinstalled. I started to customize it: upgrading from 4GB to 8GB of RAM increased the price by $120, which was reasonable (though not quite as good as Amazon's prices). However, I could not find an option to upgrade to an SSD, so buying my own off Newegg would cost me $240. Finally, the only video card options Dell offered were ATI cards, and I have had some horrible experiences in the past trying to get dual monitors to work with Ubuntu and ATI cards (NVIDIA seems to be better about providing good Linux drivers). At this point, I was over $1000 and was not so sure about the video card, so I started asking some friends for their input.

Unfortunately, I have smart, capable friends who build their own machines from parts, and they were able to convince me that I could, too. You see, in general, I hate dealing with hardware. For me, hardware is simply an inevitable requirement for software. When software goes wrong, I have some chance of debugging it and can attack the problem right away. By comparison, when hardware goes wrong, I am less capable, and I may have to wait for a new part from Amazon to arrive before I can continue debugging the problem, which sucks.

At the same time, I realized that I should probably get past my aversion to dealing with hardware, so I started searching for blog posts by people who had built their own Ubuntu boxes. I found one post by a guy who built his PC for $388.95, which was far less than the Dell that I was looking at! Further, he itemized the parts that he bought, so at least I knew that if I followed his steps, I would end up with something that worked with Ubuntu (ending up with hardware that was not supported by Ubuntu was one of my biggest fears during this project). I cross-checked this list with a friend who had recently put together a Linux machine with an Intel i7 chip, and he was really happy with it, so I ended up buying an i7 as well, along with the DX58SO motherboard that was recommended for use with it. This made things a bit pricier than they were in the blog post I originally looked at:

  • Motherboard: Intel DX58SO Extreme Series X58 ATX Triple-channel DDR3 16GB SLI ($269.99)
  • CPU: Intel Core i7 950 3.06GHz 8M L3 Cache LGA1366 Desktop Processor ($234.99)
  • RAM: Corsair XMS3 4GB 1333MHz PC3-10666 240-pin DDR3 Memory Kit for Intel Core i3/i5/i7 and AMD, CMX4GX3M1A1333C9 (2 x $39.99 = $79.98)
  • Case: Cooler Master Elite 360 RC-360-KKN1-GP ATX Mid Tower/Desktop Case, Black ($39.99)
  • Hard Drive: Crucial RealSSD C300 Series 128GB Solid State Drive, CTFDDAC128MAG-1G1 ($237.49)
  • Power Supply: Corsair CMPSU-750HX 750-Watt HX Professional Series 80 Plus Certified Power Supply, compatible with Core i7 and Core i5 ($109.99)
  • Sales Tax: $55.47
  • Total: $1027.90

At this point, I should have acknowledged that what I had put together (on paper) was now in the price range of what I was originally looking at on Dell's web site. Unfortunately, by then I had mentally committed to being a badass and building a machine, so I forged ahead with my purchase.

I also should have acknowledged that this list of parts did not include a video card...

In a few days, everything had arrived, and I started putting it all together as best I could figure out. I tried following the assembly instructions verbatim from the manuals, but that proved to be a huge mistake, as the suggested assembly order was not appropriate for my parts. For example, the case instructions recommended that I install the power supply first and then the motherboard, but that ended up making the SATA connectors inaccessible, so I had to remove the motherboard, then the power supply, plug in the SATA cables, and then put everything back together again. (This was just one of many such exercises that I went through.)

When I was close to having something that I thought would boot, I finally accepted the fact that I had failed to order a video card, so I tried using the one from my XP machine in hopes that it would allow me to kick off the Ubuntu installation process, and then I could walk to Best Buy to purchase a new video card. In the months leading up to the death of my XP machine, I had a lot of problems with my monitors, so it should have been no surprise that my installation screen looked like this:

Regardless, this allowed me to run the memory test (which runs forever, by the way; I let it run for hours before I decided to investigate why it never stopped) while I went off to Best Buy. Because I had already convinced myself that an NVIDIA card would work better with Ubuntu than an ATI one, I went ahead and bought this badass-looking video card: the EVGA GeForce GTX 550 Ti. It was not cheap ($250 at Best Buy, though it would have been $125 on Amazon), but I was a man on a mission, so nothing could stop me.

Once I got home and dropped the card in, I had a problem that I never would have anticipated: the video card did not fit in my case. Well, the card itself fit, but it was so close to the power supply that there was not enough room to plug in the separate power connector that the video card needed. At that point, I was both desperate and determined, so I took out the power supply and tried to wrench off its casing to create a hole big enough to leave room behind the video card to plug it in. As you can see, I did succeed in removing quite a bit of metal from the power supply (and most definitely voided the warranty):

Despite my handiwork with a pair of pliers, I was unable to remove enough metal from the power supply to make space to power the video card, so it would have to go back to Best Buy. I decided to search for [ubuntu 11.04 best video card] to find something that I could overnight from Amazon. I followed the links from this blog post and decided to go with the ZOTAC nVidia GeForce 9800GT, which was $102.32 after tax and overnight shipping. One of the main selling points for me was the following comment in one of the Amazon reviews: "Another advantage of this card is that it DOES NOT require your power supply to have a video card power connector." Although I was originally hoping to get a card with two DVI ports (instead of one DVI and one VGA), at that point I really just wanted something that would work.

While I was waiting for the new video card to arrive, I tried installing Linux on my SSD with my half-assed video card. Although it seemed like it was going to install off of the USB flash drive on Day 1, my PC did not seem to want to accept it on Day 2. I spent some time that morning Googling for "@ogt error" (because that is what I saw on my screen) until I realized that the screen actually said "boot error" and my video card was just garbling the characters. I rewrote the USB drive with all sorts of different ISOs, and I started to wonder whether buying the cheapest flash drive at Best Buy ($8 for 4GB!) was a mistake. I then tried the USB drive on another Windows machine I had lying around, and it booted fine, at which point I was really stumped. Again, I asked some friends what they thought, and they recommended installing from a CD, as that was much more reliable.

As you may recall, buying a CD-ROM drive was something I had avoided, so what could I do? I tried reusing the one from my old Dell, but that turned out to be a non-starter because it used an IDE connector, and my new motherboard only had SATA ports. Instead, I hoofed it back to Best Buy to "borrow" a drive for 24 hours or so. This was one of my better decisions during this entire process, as installing from a CD-R that I burned with an 11.04 ISO worked flawlessly.

Once my video card finally came in and Ubuntu was installed, I felt like I was almost done! Of course, I was wrong. Getting two 24" monitors working in portrait mode turned out to be quite a challenge. The first step was to install the NVIDIA drivers for Ubuntu. Originally, I downloaded the binary installer from NVIDIA's web site, but that broke everything. Fortunately, a friend helped me figure out how to uninstall the binary driver (sudo ./NVIDIA-Linux-x86_64-270.41.19.run --uninstall) and replace it with a proper package (sudo apt-get install nvidia-current). After that, things were working fine with one monitor in landscape mode, but the jump to portrait was more challenging.

Initially, I tried to do everything through the NVIDIA GUI on Ubuntu, but it did not present an option to rotate the monitor. I found a blog post that recommended adding Option "Rotate" "CCW" to /etc/X11/xorg.conf, which indeed helped me get the first monitor working in portrait mode. I was able to add the second monitor via the NVIDIA GUI and edited xorg.conf again to rotate it. At this point, everything looked great except that I could not drag windows from one monitor to another. To do that, I had to enable "TwinView" in the NVIDIA GUI, which did enable me to drag windows across screens, except NVIDIA insisted that the cursor flow from the bottom of the left monitor to the top of the right monitor instead of horizontally across. I did many Google searches to try to find a simple solution, but I had no luck. Ultimately, I ended up reading up on xorg.conf until I understood it enough to edit it by hand to get things to work. At long last, everything was working!
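For anyone attempting the same dual-portrait setup, the relevant options all live in the Device section of xorg.conf. Rotate, TwinView, and MetaModes are real NVIDIA driver options, but the identifiers, offsets, and exact combination below are illustrative, and the recipe varies by driver version:

    Section "Device"
        Identifier  "nvidia-card"
        Driver      "nvidia"
        # Rotate the desktop counter-clockwise for portrait monitors.
        Option      "Rotate"     "CCW"
        # TwinView drives both monitors from a single X screen, which
        # is what allows windows to be dragged between them.
        Option      "TwinView"   "True"
        # Explicit MetaModes offsets are one way to pin the rotated
        # panels side by side: each rotated 1920x1200 panel becomes
        # 1200x1920, so the second one starts at x offset 1200.
        Option      "MetaModes"  "DFP-0: nvidia-auto-select +0+0, DFP-1: nvidia-auto-select +1200+0"
    EndSection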

The final step was to get everything in the case. This was a little tricky because my power supply came with a ton of cables, and wedging them in such that they did not block any of the three exposed fans was non-trivial. Further, the case did not have a proper slot for an SSD, so I ended up buying a 3.5 to 2 X 2.5-Inch Bay Converter to hold the SSD in place. Unfortunately, the case had weird screw holes such that it was impossible to secure the converter in place, but fortunately the top fan prevents the bay from falling into the rest of the case, so it seems good enough. Considering that I already have a power supply with a gaping hole in it and a mess of cables, this did not seem like my biggest concern.

So what have I learned? Primarily, I learned that I should never do this again, but if I had to, I would be much less afraid to mess with hardware the next time around. Including the cost of the video card and the bay converter, I spent $1138.92 in cash and about two days' worth of my time. Most of those two days were spent being angry and frustrated. When I was just about finished with the entire project, I noticed an ad on one of the blog posts I had used for help, pointing to a site I had not heard of before: system76.com. Apparently, they sell Ubuntu laptops, desktops, and servers, and they have a product line called Wildebeest Performance that I could have customized to almost exactly what I said I wanted at the outset of this project. On system76.com, a machine with Ubuntu 11.04, an i7 processor, 8GB of RAM, a 120GB SSD, and an NVIDIA card with two outputs (one DVI, one VGA) costs $1117.00, which is less than what I paid buying the parts individually. Obviously, buying the machine directly would have been a huge time savings, and I'm sure the inside of a system76 PC would not be nearly as sloppy as mine. In this, as in many other things, I need to have more patience and do more research before diving into a project. It can save a lot of time and sanity in the long run.