Tuesday, August 19, 2014

Hacking on Atom Part I: CoffeeScript

Atom is written in CoffeeScript rather than raw JavaScript. As you can imagine, this is contentious with “pure” JavaScript developers. I had a fairly neutral stance on CoffeeScript coming into Atom, but after spending some time exploring its source code, I am starting to think that this is not a good long-term bet for the project.

Why CoffeeScript Makes Sense for Atom

This may sound silly, but perhaps the best thing that CoffeeScript provides is a standard way to declare JavaScript classes and subclasses. Now before you get out your pitchforks, hear me out:

The “right” way to implement classes and inheritance in JavaScript has been the subject of great debate for some time. Almost all options for simulating classes in ES5 are verbose, unnatural, or both. I believe that official class syntax is being introduced in ES6 not because JavaScript wants to be thought of as an object-oriented programming language, but because developers’ desire to project the OO paradigm onto the language today is so strong that it would be irresponsible for TC39 to ignore their demands. This inference is based on the less aggressive maximally minimal classes proposal that has superseded an earlier, more fully featured proposal, as the former states that “[i]t focuses on providing an absolutely minimal class declaration syntax that all interested parties may be able to agree upon.” Hooray for design by committee!
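To make the class point concrete, compare declaring a subclass in CoffeeScript:

    class Animal
      constructor: (@name) ->
      speak: -> console.log "#{@name} makes a noise"

    class Dog extends Animal
      speak: -> console.log "#{@name} barks"

with one of the many ways to hand-roll the same thing in ES5 (a sketch of one common pattern, not the only one):

    function Animal(name) {
      this.name = name;
    }
    Animal.prototype.speak = function() {
      console.log(this.name + ' makes a noise');
    };

    function Dog(name) {
      Animal.call(this, name);  // the "super" call, by convention only
    }
    Dog.prototype = Object.create(Animal.prototype);
    Dog.prototype.constructor = Dog;
    Dog.prototype.speak = function() {
      console.log(this.name + ' barks');
    };

In CoffeeScript, there is exactly one way to write this; in ES5, every project picks its own variation, and that is precisely the debate CoffeeScript eliminates.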

Aside: Rant

The EcmaScript wiki is the most frustrating heap of documentation that I have ever used. Basic questions, such as, “Are Harmony, ES.next, and ES6 the same thing?” are extremely difficult to answer. Most importantly, it is impossible to tell what the current state of ES6 is. For example, with classes, there is a proposal under harmony:classes, but another one at Maximally Minimal Classes. Supposedly the latter supersedes the former. However, the newer one has no mention of static methods, which the former does and both Traceur and JSX support. Perhaps the anti-OO folks on the committee won and the transpilers have yet to be updated to reflect the changes (or do not want to accept the latest proposal)?

In practice, the best information I have found about the latest state of ES6 is at https://github.com/lukehoban/es6features. I stumbled upon that via a post in traceur-compiler-discuss that linked to some conference talk that featured the link to the TC39 member’s README.md on GitHub. (The conference talk also has an explicit list of what to expect in ES7, in particular async/await and type annotations, which is not spelled out on the EcmaScript wiki.) Also, apparently using an entire GitHub repo to post a single web page is a thing now. What a world.

To put things in perspective, I ran a git log --follow on some key files in the src directory of the main Atom repo, and one of the earliest commits I found introducing a .coffee file is from August 24, 2011. Now, let’s consider that date in the context of modern JavaScript transpiler releases:

  • CoffeeScript 1.0: December 2010
  • Traceur: announced mid-2011
  • Dart: announced October 2011 (1.0 did not arrive until November 2013)
  • TypeScript: first public release October 2012

As you can see, at the time Atom was spinning up, CoffeeScript was the only mature transpiler. If I were starting a large JavaScript project at that time (well, we know I would have used Closure...) and wanted to write in a language whose transpilation could be improved later as JavaScript evolved, then CoffeeScript would have made perfect sense. Many arguments about what the “right JavaScript idiom” is (such as how to declare classes and subclasses) go away because CoffeeScript is more of a “there’s only one way to do it” sort of language.

As I mentioned in my comments on creating a CoffeeScript for Objective-C, I see three primary benefits that a transpiled language like CoffeeScript can provide:

  • Avoids boilerplate that exists in the target language.
  • Subsets the features available in the source language to avoid common pitfalls that occur when those features are used in the target language.
  • Introduces explicit programming constructs in place of unofficial idioms.

Note that if you have ownership of the target language, then you are in a position to fix these things yourself. However, most of us are not, and even those who are may not be that patient, so building a transpiler may be the best option. As such, there is one other potential benefit that I did not mention in my original post, but has certainly been the case for CoffeeScript:

  • Influences what the next version of the target language looks like.

But back to Atom. If you were going to run a large, open source project in JavaScript, you could potentially waste a lot of time trying to get your contributors to write JavaScript in the same way as the core members of the project. With CoffeeScript, there is much less debate.

Another benefit of using CoffeeScript throughout the project is that config files are in CSON rather than JSON. (And if you have been following this blog, you know that the limitations of JSON really irritate me. Go JSON5!) CSON addresses many of the shortcomings of JSON because it supports comments, trailing commas, unquoted keys, and multiline strings (via triple-quotes rather than backslashes). Unfortunately, it also supports executing arbitrary code, as explained in the README:

“CSON is fantastic for developers writing their own configuration to be executed on their own machines, but bad for configuration you can't trust. This is because parsing CSON will execute the CSON input as CoffeeScript code...”
Uh...what? Apparently there’s a project called cson-safe that is not as freewheeling as the cson npm module, and it looks like Atom uses the safe version. One of the unfortunate realities of the npm ecosystem is that the first mover gets the best package name even if he does not have the best implementation. C’est la vie.
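For reference, here is a small, made-up config that shows off the CSON syntax:

    # Comments are allowed.
    editor:
      fontSize: 14      # unquoted keys, no commas required
      showInvisibles: true
    keymap: [
      'ctrl-x'
      'ctrl-c'
    ]
    banner: '''
      Multiline strings work, too,
      with no backslash continuations.
    '''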

Downsides of Atom Using CoffeeScript

I don’t want to get into a debate about the relative merits of CoffeeScript as a language here (though I will at some other time, I assure you), but I want to discuss two practical problems I have run into that would not exist if Atom were written in JavaScript.

First, many (most?) Node modules that are written in CoffeeScript have only the transpiled version of the code as part of the npm package. That means that when I check out Atom and run npm install, I have all of this JavaScript code under my node_modules directory that has only a vague resemblance to its source. If you are debugging and have source maps set up properly, then this is fine, but it does not play nice with grep and other tools. Although the JavaScript generated from CoffeeScript is fairly readable, it is not exactly how a JavaScript developer would write it by hand, and more importantly, single-line comments are stripped.
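For example, this CoffeeScript (a contrived snippet):

    # Fetch a user record. (This comment will not survive compilation.)
    fetchUser = (id) -> api.get "/users/#{id}"

compiles to roughly the following JavaScript, with the var declaration hoisted, the return made explicit, and the comment gone:

    var fetchUser;

    fetchUser = function(id) {
      return api.get("/users/" + id);
    };

It is readable, but grep node_modules for the comment, or for the (id) -> signature, and you come up empty.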

Second, because Atom is based on Chrome and Node, JavaScript developers writing for Atom have the benefit of being able to rely on the presence of ES6 features as they are supported in V8. Ordinary web developers do not have this luxury, so it is very satisfying to be able to exploit it! However, as ES6 introduces language improvements, they will not be available in CoffeeScript until CoffeeScript supports them. Moreover, as JavaScript evolves into a better language (by copying features from languages like CoffeeScript), web developers will likely prefer “ordinary” JavaScript because it will have better tool support. As the gap between JavaScript and CoffeeScript diminishes, the benefits of doing something nonstandard (i.e., using a transpiler) no longer outweigh the costs as much as they once did. Arguably, the biggest threat to CoffeeScript is CoffeeScript itself!
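As a taste of what I mean, here is the sort of thing an Atom package written in plain JavaScript can do (a sketch; exactly which ES6 features have landed depends on the V8 version Atom ships):

    var fs = require('fs');

    // A real Map instead of an Object abused as a dictionary.
    var cache = new Map();

    // Native promises instead of hand-rolled callback plumbing.
    function readConfig(path) {
      if (cache.has(path)) {
        return Promise.resolve(cache.get(path));
      }
      return new Promise(function(resolve, reject) {
        fs.readFile(path, 'utf8', function(err, data) {
          if (err) return reject(err);
          cache.set(path, data);
          resolve(data);
        });
      });
    }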

Closing Thoughts

Personally, I plan to develop Atom packages in JavaScript rather than CoffeeScript. I am optimistic about where JavaScript is going (particularly with respect to ES7), so I would prefer to be able to play with the new language features directly today. I don’t know how long my code will live (or how long ES6/ES7 will take), but I find comfort in knowing that I am less likely to have to rewrite my code down the road when JavaScript evolves. Finally, there are some quirks to CoffeeScript that irk me enough to stick with traditional JavaScript, but I’ll save those for another time.

Hacking on Atom Series

I have started to spend some time hacking on Atom, and I wanted to share some of my thoughts and learnings from this exploration. I have not posted on my blog in a while, and this seems like the right medium to document what I have discovered (i.e., too long for a tweet; not profound enough for an essay; failing to answer in the form of a question suitable for the Atom discussion forum).

In the best case, in places where I have stumbled with Atom’s design or implementation, I hope that either (1) someone will set me straight on best practices and why things work the way they do, or (2) these posts will spur discussion on how to make things better. Hacking on Atom is a lot of fun, but I still have a lot to learn.

Monday, October 28, 2013

Today I am making Appendix B of my book available for free online

Today I am making one of the appendices of my book available for free: Appendix B: Frequently Misunderstood JavaScript Concepts. You might expect that appendices are just extra junk that authors stick at the end of the book to up their page count, like reference material that is readily available online. And if that were the case, this would be an uncharitable and worthless thing to do.

As it turns out, many have told me that Appendix B has been the most valuable part of the book for them, so I assure you that I am not releasing the "dregs" of the text. Unsurprisingly, many many more folks are interested in JavaScript in general than are interested in Closure Tools, which is presumably why this appendix has been a favorite.

Because the manuscript was written in DocBook XML, it was fairly easy to programmatically translate the XML into HTML. For some reason, I do not have a copy of the figure from Appendix B that appears in the book, so I had to recreate it myself. Lacking real diagramming tools at the time, I created the original illustration using Google Docs. It took several rounds with the illustrator at O'Reilly to get it reproduced correctly for my book. Since I was too embarrassed to include the Google Docs version in this "HTML reprint," I redid it using OmniGraffle, which I think we can all agree looks much better.

In some ways, the HTML version is better than the print or ebook versions in that (1) the code samples are syntax highlighted, and (2) it is possible to hyperlink to individual sections. Depending on how this is received, I may make more chunks of the book available in the future. As I mentioned, the script to convert the XML to HTML is already written, though it does contain some one-off fixes that would need to be generalized to make it reusable for the other chapters.

If you want to read the rest of Closure: The Definitive Guide (O'Reilly), you can find it on Amazon and other web sites that sell dead trees.

Saturday, August 17, 2013

Hello, World!

Traditionally, "Hello, World!" would be the first post for a new blog, but this is post #100 for the Bolinfest Changeblog. However, I have posted so infrequently in the past year and a half that I thought it would be nice to check in, say hello, and report on what I have been up to, as well as on some of the projects that I maintain, since I have not been able to respond to all of the email requests for status updates. I'll start with the big one:

I moved to California and joined Facebook

The latter was not the cause for the former. I moved to California to co-found a startup (check!), but after killing myself for three months, I decided that the combination of working on consumer software that served millions (now billions) of users and getting a stable paycheck was something I really enjoyed. Now that I was in California, there were more companies where I could do that than there were in New York City, so I shopped around a bit. After working at Google for so long and feeling like Facebook was The Enemy/technically inferior (they write PHP, right?), I never thought that I would end up there.

Fortunately, I decided to test my assumptions about Facebook by talking to engineers there who I knew and greatly respected, and I learned that they were quite happy and doing good work. Facebook in 2012 felt more like the Google that I joined in 2005 than the Google of 2012 did, so I was also excited about that. Basically, Facebook is a lot smaller than Google, yet I feel like my potential for impact (both in the company and in the world) is much larger. I attended Google I/O this year, and although I was in awe of all of the announcements during the keynote, I imagined that whatever I could do at Google would, at best, be one tiny contribution to that presentation. Personally, I found the thought of feeling so small pretty demoralizing.

Fast-forward to life at Facebook, and things are exciting and going well. "Done is better than perfect" is one of our mantras at Facebook, which encourages us to move fast, but inevitably means that there is always some amount of shit that is broken at any given time. Coming from Google, this was pretty irritating at first, but I had to acknowledge that that mentality is what helped Facebook get to where it is today. If you can be Zen about all of the imperfections, you can find motivation in all the potential impact you can have. At least that's what I try to do.

Using Outlook instead of Gmail and Google Calendar is infuriating, though.

I spend my time writing Java instead of JavaScript

In joining a company that made its name on the Web, and knowing a thing or two about JavaScript, I expected that I would be welcomed into the ranks of UI Engineering at Facebook. Luckily for me, that did not turn out to be the case.

At Facebook, there are many web developers who rely on abstractions built by the core UI Engineering (UIE) team. When I joined, my understanding was that if you wanted to be a UIE, first you needed to "pay your dues" as a web developer for at least a year so that you understood the needs of the Web frontend, and then perhaps you could be entrusted as a UIE. In general, this seems like a reasonable system, but all I was thinking was: "I made Google Calendar and Google Tasks work in IE6. I have paid my fucking dues."

Fortunately, some friends steered me toward joining mobile at Facebook, it being the future of the company and whatnot. At first, I was resistant because, to date, I had built a career on writing JavaScript, and I was reluctant to set that skillset aside and learn a new one. I also realized that that line of thinking is what would likely turn me into a dinosaur one day, so I needed to resist it.

Okay, so Android or iOS? I had just gotten a new iPhone after living miserably on a borrowed Nexus One for the previous six months (my 3GS only lasted 75% of the way through my contract), so the thought of doing Android development, which would require me to live with an Android phone every day again, was pretty distasteful. However, my alternative was to write Objective-C (a language where you have to manage your own memory and that suffers from dissociative identity disorder, as it has dual C and Objective-C constructs for every programming concept) in an IDE I cannot stand (Eclipse figured out how to drag-and-drop editor tabs about a decade ago; why does Xcode make organizing my windows so difficult?) on a platform that irritates me (every time the End key takes me to the end of the document instead of the end of the line, I want to shoot someone). As an Android developer, at least I could use Linux and write in Java (a language that I should probably be ashamed to admit that I enjoy, but whose effectiveness in programming in the large continues to make me a supporter).

Historically, the iPhone has always been a beloved piece of hardware (and is the device app developers flock to), whereas Android devices have been reviled (anyone remember the G1?). The same was true at Facebook when I joined: more engineers owned iPhones themselves (for a long time, it felt like the only people who owned Android phones were Google employees who had been given them for free, though obviously that has changed...), so more of them wanted to work on the iPhone app, and hence Facebook's iPhone app was a lot better than its Android app. Again, this is what made the possibility of working on Android simultaneously frustrating and intriguing: the codebase was a nightmare, but there was a huge opportunity for impact. For my own career, it was clear that I should learn at least one of Android or iOS, and Android seemed like the one where I was likely to have the most success, so that's what I went with.

But like I said, the codebase was a nightmare. What was worse was that building was a nightmare. We were using a hacked up version of the standard Ant scripts that Google provides for Android development. Like many things at Facebook, our use case is beyond the scale of ordinary use cases, so the "standard" solution does not work for us. We actually had many Ant scripts that called one another, which very few people understood. As you can imagine, developers' builds frequently got into bad states, in which case people could not build, or would do a clean build every time, which is very slow. In sum, Android development was not fun.

After dealing with this for a couple of weeks and wondering what I had gotten myself into, I said, "Guys, this is ridiculous: I am going to create a new build system. We can't keep going like this." As you can imagine, there were a lot of objections:

  • "Why can't we use an existing build system like Ninja or Gradle?"
  • "What happens when Google fixes its build system? Then we'll have to play catch up."
  • "Would you actually support your custom build system long-term?"
  • "Seriously, why do engineers always think they can build a better build system?"
Resounding support, I had not.

At that point, I had no credibility in Android, and more importantly, no credibility at Facebook. If I decided to go against the grain and my build system was a failure, I would not have any credibility in the future, either. On the other hand, there was no way I was going to endure life as an Android developer if I had to keep using Ant, so I forged ahead and started working on my own build system.

In about a month and a half, I built and deployed Buck, an Android build system. Builds were twice as fast, so the number of objections decreased considerably. Life at Facebook was, and continues to be, very good.

I spoke at some conferences...

Thus far in 2013, I gave two external talks at conferences, two internal talks at Facebook, and one best man's speech. Public speaking really stresses me out, yet I continue to volunteer to do it. (The best man's speech was the shortest of the five, but probably the most stressful. It's a friendly audience, but expectations are high!) I used to think it was weird when I read that actors like John Malkovich often avoid ever seeing the movies that they are in, but given that I rarely watch recordings of talks I have given, I guess I can understand why. Like listening to your own voice on tape, it always seems worse to you than to anyone else.

This year, for the first time, I was invited to speak at a conference. The name of the conference was mloc.js (mloc = Million Lines of Code), and the focus was on scaling JavaScript development. Both programming in the large and JavaScript development are topics near and dear to my heart, so this seemed like a great fit for me. I was also honored to be invited to speak anywhere, let alone in a foreign country (the conference was in Budapest), so I accepted.

Since some nice group of people was paying me to fly to their country and stay in a hotel, I felt that I owed it to them to give a presentation that did not suck. I also felt like they deserved something new, so rather than rehash one of my talks from 2011 promoting Google Closure, I decided to create a new talk from scratch. The two topics that came to mind were optional typing and async paradigms in JavaScript. Initially, I sketched out one talk about both topics, but I quickly realized that I would only have time for one of the two. Although I feel like someone needs to go in and slap the Node community around until they stop creating APIs based on callbacks instead of Promises (in hopes that we ultimately get an await keyword in JavaScript that takes a Promise), I decided that I could deliver a more thought-provoking talk about optional typing that included references to Legos and OCaml. And so that's what I did.
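To spell out that Node gripe in code (a sketch, assuming a Promise implementation is available; the await line is the hypothetical future syntax):

    var fs = require('fs');

    // Callback style: every caller re-implements the same error plumbing.
    fs.readFile('/etc/hosts', 'utf8', function(err, data) {
      if (err) return console.error(err);
      console.log(data.length);
    });

    // Promise style: errors and sequencing compose...
    function readFilePromise(path) {
      return new Promise(function(resolve, reject) {
        fs.readFile(path, 'utf8', function(err, data) {
          err ? reject(err) : resolve(data);
        });
      });
    }

    // ...and would pair naturally with an await keyword one day:
    //   var data = await readFilePromise('/etc/hosts');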

(The title of the talk was Keeping Your Code In Check, which is not very specific to what I actually spoke about. Often, conference organizers want talk names and abstracts early in the process so they can make schedules, create programs, etc. In contrast, conference presenters want to do things at the last minute, so I supplied an amorphous title for my talk, figuring that whatever I ended up speaking about could fit the title. I'd say that it worked.)

I spent hours and hours working on this talk. I was so exhausted when I got to the end of the presentation that I botched some of the Q&A, but whatever. The slides are available online, but I always find it hard to get much from a deck alone, and watching the full video takes a long time. In the future, I hope to convert the talk into an essay that is easier (and faster) to consume.

I tried my best not to make Google Closure the focus of my talk, instead trying to expose people to the tradeoffs of various type systems. Google Closure was mainly in there because (1) I did not have to spend any time researching it, (2) it is one of the few successful examples of optional typing out there, and (3) if I sold you on optional typing during my talk, then Google Closure would be something that you as a JavaScript developer could go home and use. I even gave TypeScript some airtime during my talk (though I omitted Dart, since Google was a sponsor of the conference, so they had their chance to pitch), so I thought that was pretty fair. By comparison, if you look at the conference schedule, almost everyone else in the lineup was shilling their own technology, which might be the perfect solution for a handful of members of the audience, but was less likely to be of universal interest. That was my thinking, anyway.

...which included open sourcing the Android build tool that I wrote (Buck)

The second conference talk I gave was at Facebook's own mobile developer conference in New York City. As I mentioned above, the whole custom Android build system thing worked out pretty well internally, so we deemed it worthy to open source it. This conference seemed like a good vehicle in which to do so, so once we launched Facebook Home on April 4 (which I worked on!), I focused 100% on my conference talk on April 18th.

The presentation itself, though, was only a small portion of what I needed to do. We had to clean up the code, excise any comments that referred to secret, internal stuff, and generate sufficient documentation so that people could actually use the thing. For once, I successfully delegated, and everything got done on time, which was great. The only downside was that I was not able to spend as much time rehearsing my talk as I would have liked, but I think that it came out okay.

Note that the title of the talk was How Facebook builds Facebook for Android, which does not mention anything about Buck or a build system at all. This is because we decided to make the open sourcing a surprise! I asked whether I should do "the big reveal" at the beginning or the end of the talk. I was advised that if I did it at the beginning, then people would stop paying attention to me and start reading up on it on their laptops. If I did it at the end, then I would not leave myself any time to talk about Buck. So clearly, the only reasonable alternative was to "do it live," which is what I ended up doing. If you fast-forward to around 22 minutes in, you can see my "showmanship" (that term definitely deserves to be in quotes) where I get the audience to ask me to flip the switch on GitHub from private to public. It was nerve-wracking, but great fun.

I updated plovr

With so much going on professionally, and my new focus on Android, it has not left much time for plovr. This makes me sad, as the October 2012 release of plovr was downloaded over 9000 times, so that's a lot of developers whose lives could be improved by an updated release. Last month, I finally put out a new release. Part of the reason this took so long is that, in the past 18 months, Oracle dropped support for Java 6 and Google moved some of the SVN repos for Closure Tools over to Git. Both of these things broke the "turn the crank" workflow I had put in place to facilitate turning out new versions of plovr with updated versions of Closure Tools. I finally created a new workflow, and as part of that work, I moved plovr off of Google Code and onto GitHub. I have already accepted a couple of pull requests, so this seems like progress.

Admittedly, now that I am writing Java rather than JavaScript every day, I don't have as much skin in the game when it comes to moving plovr forward. I am hoping to delegate more of that responsibility to those who are using plovr actively. I have some specific individuals in mind, though unfortunately I have not made the effort to reach out quite yet.

Chickenfoot

Chickenfoot is the venerable end-user programming tool for customizing and automating the Web that I built as part of my Master's thesis at MIT in 2005. It was built as a Firefox extension, as Firefox gave us a unique way to make invasive modifications to a real web browser, while also providing a distribution channel to share our experimental changes with ordinary people (i.e., non-programmers). This was pretty spectacular.

The last official release of Chickenfoot (1.0.7) by MIT was in 2009, and targeted Firefox 3. As the current release of Firefox is version 23.0.1, clearly Chickenfoot 1.0.7 is a bit behind. After the NSF grant for Chickenfoot at MIT ran out (which is what effectively ended "official" development), I moved the source code to GitHub and got some help to release Chickenfoot 1.0.8, which was verified to work on Firefox 4.0-7.0. It may very well also work on Firefox 23.0.1, but I have not kept up.

Historically, we hosted Chickenfoot downloads on MIT's site (instead of Mozilla's official Add-ons site) because we wanted to control our releases. My experience with submitting an extension to the Mozilla Add-ons site has been no better than my experience with submitting an iOS app to Apple's app store, in that a review on the vendor's end is required, and rejections are frustrating and do happen. Since we were not going to host releases on MIT's site anymore, I decided to give the Add-ons store another try. Here are the contents of the rejection letter from January 19, 2012 (emphasis is mine):

Your add-on, Chickenfoot 1.0.9, has been reviewed by an editor and did not meet the criteria for being hosted in our gallery.

Reviewer: Nils Maier

Comments: Your version was rejected because of the following problems:

1) This version contains binary, obfuscated or minified code. We need to review all of your source code in order to approve it. Please send the following items to amo-admin-reviews@mozilla.org:
  * A link to your add-on listing.
  * Whether you're nominating this version for Full Review or Preliminary Review.
  * The source files of your add-on, either as an attachment or a link to a downloadable package.
  * For updates, including the source code of the previously approved version or a diff is also helpful.
You can read our policies regarding source code handling here: https://addons.mozilla.org/en-US/developers/docs/policies/reviews#section-binary.

2) Your add-on uses the 'eval' function to parse JSON strings. This can be a security problem and we normally don't allow it. Please read https://developer.mozilla.org/en/Using_JSON_in_Firefox for more information about safer alternatives.

3) Your add-on uses the 'eval' function unnecessarily, which is something we normally don't accept. There are many reasons *not* to use 'eval', and also simple alternatives to using it. You can read more about it here: https://developer.mozilla.org/en/XUL_School/Appendix_C:_Avoid_using_eval_in_Add-ons

4) Your add-on creates DOM nodes from HTML strings containing unsanitized data, by assigning to innerHTML or through similar means. Aside from being inefficient, this is a major security risk. For more information, see https://developer.mozilla.org/en/XUL_School/DOM_Building_and_HTML_Insertion

(All of the above is already marked by the validator as warnings. I stopped at this point; there might be more issues)

Please fix them and submit again. Thank you.

This version of your add-on has been disabled. You may re-request review by addressing the editor's comments and uploading a new version. To learn more about the review process, please visit https://addons.mozilla.org/developers/docs/policies/reviews#selection

If you have any questions or comments on this review, please reply to this email or join #amo-editors on irc.mozilla.org

An underlying principle of Chickenfoot is to empower an end-user to be able to write a script, which we would eval(), and that script could modify whatever the user wanted. Clearly, our goals flew in the face of the security model that extension reviewers at Mozilla are trying to maintain. The thought of trying to clean things up and then convince a reviewer that what we were doing was legitimate seemed overwhelming. (Also, the code had been bastardized by graduate students over the years, so the thought of me trying to decipher it after seven years of writing JavaScript in an industry setting was equally overwhelming.)

(This experience is what engendered my (arguably ill-advised) quip to Mike Shaver during my first week at Facebook: "I used to like Firefox." I didn't know that, prior to Facebook, Mike had been the VP of Engineering at Mozilla, but I'm pretty sure that everyone within earshot did. Awk-ward. I think we spent the next hour or two complaining about calendaring standards and timezones. We're friends now.)

Given the uphill battle of getting Chickenfoot into the Add-ons site coupled with the general need for a full-on rewrite, people sometimes ask: when are we going to port Chickenfoot to Chrome? To be honest, I am not sure whether that's possible. For starters, Chickenfoot leverages Thread.sleep() from XPCOM so that we can do a busy-wait to suspend JavaScript execution while pages are loading, which enables end-user programmers to write linear code. (I think we would have to run a JavaScript interpreter written in JavaScript to get around this, which is doable, albeit a bit crazy.) Further, for security reasons (or maybe for forward/backward-compatibility reasons), the APIs available to browser extensions are historically more restricted in Chrome than they are in Firefox. (Last I checked, Mozilla would not allow you to declare your extension compatible with all future versions of Firefox, so you had to specify a maxVersion and update your extension with every new Firefox release, which drove me insane.) In Firefox, the entire browser is scriptable, so if you can get a reference to any part of the Firefox UI in JavaScript, you can basically do whatever you want with it. This has not been my experience with Chrome. (On the bright side, I have never seen an instance where a Chrome user has lost half of his screen real-estate to browser toolbars, which I have certainly seen in both Firefox and more notably, Internet Explorer.)
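To show what that linear style buys the end-user programmer, a typical Chickenfoot script reads something like this (from memory, so treat the command details as approximate):

    // Each command blocks (via that busy-wait) until the page finishes
    // loading, so there are no callbacks for the user to reason about.
    go("http://www.google.com")
    enter("search", "chickenfoot mit")
    click("Google Search button")
    click("Chickenfoot link")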

It actually breaks my heart when I get an email from an end-user programmer who asks for a new version of Chickenfoot. Unlike a full-fledged programmer, most of these folks don't have the tools or knowledge in order to build an alternative solution. Some of them continue to run Firefox 2.0 or 3.0 just so the workflow they created around Chickenfoot still works. I would love to have a clean Chickenfoot codebase from which I could pop out an up-to-date extension, a piece of software that leverages everything I have learned about development over the past decade, but I don't have the cycles.

What's next?

At Facebook, I am extremely excited about the work that we're doing on Buck, and on mobile in general. We are consistently making Android developers' lives better, and we are frequently exporting these improvements to the community. I expect to continue working on Buck until we run out of ideas for how we can make developers more efficient, or at least until we have enough support for Buck in place that I feel comfortable focusing on something else. I should probably also mention that we're most definitely hiring at Facebook, so if you're also interested in working on a high-quality, open source build tool, then please drop me a line.

Saturday, May 18, 2013

Chromebook Pixel gives me an excuse to fork JSNES

For a long time, I have been intrigued by NES emulators. I was extremely excited when Ben Firshman released an NES emulator in JavaScript (JSNES) over three years ago. At the time, I noted that JSNES ran at almost 60fps in Chrome, but barely trickled along in Firefox. It's pretty shocking to see that that is still the case today: in Firefox 21.0 on Ubuntu, I am seeing at most 2fps while sitting idle on the title screen for Dr. Mario. That's pretty sad. (It turns out I was getting 1fps because I had Firebug enabled. Maybe that's why my perception of Firefox has diminished over time. Someone at Mozilla should look into that...) This is the only browser benchmark I care about these days.

When the Gamepad API was first announced for Chrome, I tried to get my USB NES RetroPort controllers to work, but Chrome did not seem to recognize them. I made a mental note to check back later, assuming the API would eventually be more polished. Fast-forward to this week where I was fortunate enough to attend Google I/O and score a Chromebook Pixel. It seemed like it was time to give my controllers another try.

Last night, I plugged a RetroPort into the Pixel and visited the Gamepad API test page, and it worked! Obviously the next thing I had to do was wire this up to JSNES, so that was the first thing I did when I woke up this morning. I now have my own fork of the JSNES project where I added support for the RetroPort controllers as well as loading local ROMs from disk. As I admit in my README.md, there are already outstanding pull requests for these types of things, but I wanted to have the fun of doing it myself (and an excuse to poke around the JSNES code).
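The hookup itself is conceptually simple: poll the Gamepad API once per frame and forward the button state to the emulator. A sketch (the setButton interface on the JSNES side is hypothetical, and button indices vary by controller):

    function pollPads(nes) {
      var getPads = navigator.getGamepads || navigator.webkitGetGamepads;
      var pads = getPads ? getPads.call(navigator) : [];
      var pad = pads[0];
      if (pad) {
        // In the early API, buttons are plain 0/1 values rather than objects.
        nes.setButton('A', !!pad.buttons[0]);
        nes.setButton('B', !!pad.buttons[1]);
      }
      requestAnimationFrame(function() { pollPads(nes); });
    }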

Finally, the one outstanding feature I hoped to add was loading ROMs from Dropbox or GDrive using pure JavaScript. Neither product appears to have a simple JavaScript API that will give you access to file data like the W3C File API does. Perhaps I'll host my fork of JSNES if I can ever add such a feature...
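The local-disk half, by contrast, is easy with the File API (a sketch; I am recalling JSNES's ROM-loading entry point from memory, so treat it as approximate):

    var input = document.getElementById('rom-chooser');  // an <input type="file">
    input.addEventListener('change', function(e) {
      var reader = new FileReader();
      reader.onload = function() {
        nes.loadRom(reader.result);  // JSNES takes the ROM as a binary string
        nes.start();
      };
      reader.readAsBinaryString(e.target.files[0]);
    });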

P.S. I should admit that one does not need a Pixel to do these types of things. However, having a new piece of hardware and APIs that have been around long enough that you expect them to be stable is certainly a motivating factor. It's nice to have a project that doesn't involve any yak-shaving, such as figuring out how to install a version of Chrome from the Beta channel!

Wednesday, January 2, 2013

Generating Google Closure JavaScript from TypeScript

Over a year ago, I created a prototype for generating Google Closure JavaScript from CoffeeScript. Today, I am releasing a new, equivalent prototype that uses TypeScript as the source language.

<shamelessplug>I will be speaking at mloc.js next month in Budapest, Hungary, which is a conference on scaling JavaScript development. There, I will discuss various tools to help keep large codebases in check, so if you are interested in hearing more about this work, come join me in eastern Europe in February!</shamelessplug>

I have been interested in the challenge of Web programming in the large for quite some time. I believe that Google Closure is currently still the best option for large-scale JavaScript/Web development, but that it will ultimately be replaced by something that is less verbose. Although Dart shows considerable promise, I am still dismayed by the size of the JavaScript that it generates. By comparison, if TypeScript can be directly translated to JavaScript that can be compiled using the advanced mode of the Closure Compiler, then we can have all the benefits of optimized JavaScript from Closure without the verbosity. What's more, because TypeScript is a superset of JavaScript, I believe that its syntax extensions have a chance of making it into the ECMAScript standard at some point, whereas the chances of Dart being supported natively in all major browsers are pretty low.
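To make the translation concrete, the idea is to turn TypeScript like this (a hand-written illustration, not actual output of the prototype):

    class Point {
      constructor(public x: number, public y: number) {}
      distanceTo(other: Point): number {
        var dx = this.x - other.x;
        var dy = this.y - other.y;
        return Math.sqrt(dx * dx + dy * dy);
      }
    }

into JavaScript whose type annotations the Closure Compiler can check and optimize in advanced mode:

    /**
     * @constructor
     * @param {number} x
     * @param {number} y
     */
    var Point = function(x, y) {
      this.x = x;
      this.y = y;
    };

    /**
     * @param {!Point} other
     * @return {number}
     */
    Point.prototype.distanceTo = function(other) {
      var dx = this.x - other.x;
      var dy = this.y - other.y;
      return Math.sqrt(dx * dx + dy * dy);
    };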

In terms of hacking on TypeScript itself, getting started with the codebase was a pain because I develop on Linux, and presumably TypeScript's developers do not. Apparently Microsoft does not care that cloning a CodePlex repository using Git on Linux does not work. Therefore, in order to get the TypeScript source code, I had to switch to a Mac, clone the repo there, and then import it into GitHub so I could clone it from my Linux box.

Once I got the code on my machine, the next step was figuring out how to build it. TypeScript includes a Makefile, but it contains batch commands rather than Bash commands, so of course those did not work on Linux, either. Modifying the Makefile was a bit of work, and unfortunately will likely be a pain to reconcile with future upstream changes. It would be extremely helpful if Microsoft switched to a cross-platform build tool (or maintained two versions of the Makefile themselves).

Once I was able to build, I wanted to start hacking and exploring the TypeScript codebase. I suspected that reading the .ts files as plaintext without any syntax highlighting would be painful, so I Googled to see if there were any good alternatives to Visual Studio with TypeScript support that would run on Linux. Fortunately, JetBrains recently released a beta of PhpStorm and WebStorm 6.0 that includes support for TypeScript, so I decided to give it a try. Given that both TypeScript and support for it in WebStorm are very new, I am quite impressed with how well it works today. Syntax highlighting and ctrl+clicking to definitions work as expected, though PhpStorm reports many false positives in terms of TypeScript errors. I am optimistic that this will get better over time.

Now that I have all of the setup out of the way, the TypeScript codebase is pretty nice to work with. The source code does not contain much in the way of comments, but the code is well-structured and the class and variable names are well-chosen, so it is not too hard to find your way. I would certainly like to upstream some of my changes and contribute other fixes to TypeScript, though if that Git bug on CodePlex doesn't get fixed, it is unlikely that I will become an active contributor.

Want to learn more about Closure? Pick up a copy of my book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps!

Wednesday, April 25, 2012

New Essay: Caret Navigation in Web Applications

For those of you who don't follow me on Twitter, yesterday I announced a new essay: Caret Navigation in Web Applications. There I talk about some of the things that I worked hard to get right when implementing the UI for Google Tasks.

This essay may not be for JavaScript lightweights. Even so, it didn't seem to gain much traction on Hacker News or on Reddit (which is particularly sad because Reddit readers were much of the reason that I took the time to write it in the first place). I suppose it would have been more popular if I had made it a lot shorter, but I believe it would have been much less useful if I did. That's just my style, I guess: Closure: The Definitive Guide was originally projected to be 250-300 pages and it ended up at 550.

I also wanted to take this opportunity to make some more comparisons about Google Tasks vs. Asana that weren't really appropriate for the essay:

  • It's annoying that Asana does not support real hierarchy. Asana has responded to this on Quora, but their response is really about making Asana's life easier, not yours. What I think is really interesting is that if they ever decide to reverse their decision, it will really screw with their keyboard shortcuts since they currently heavily leverage Tab as a modifier, but that may not work so well when users expect Tab to indent (and un-indent).
  • It's pretty subtle, but when you check something off of your list in Google Tasks, there is an animated strikethrough. I did this by creating a timing function that redraws the task with the </s> in an increasing position to achieve the effect (see the sketch after this list). This turned out to be tricky to get right because you want the user to be able to see it (independent of task text length) without it feeling slow for long tasks. Since so many people seem to delight in crossing things off their list, this always seemed like a nice touch in Google Tasks. I don't find crossing things off my list in Asana as satisfying.
  • Maybe I haven't given it enough time yet, but I have a lot of trouble with Asana's equivalent of the details pane. The thing seems to slide over when I don't want it to, or I just want to navigate through my tasks and have the details pane update and it starts bouncing on me...I don't know. Am I alone? In Tasks, it's shift-enter to get in and shift-enter to get out. It's not as sexy since there's no animation, but it's fast, it works, and it doesn't get in my way.
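Here is a sketch of that strikethrough animation (the names are hypothetical, HTML escaping is elided, and the real implementation had more edge cases to handle):

    function animateStrikethrough(taskEl) {
      var text = taskEl.textContent;
      // Advance several characters per tick so the animation takes roughly
      // the same amount of time regardless of task length.
      var step = Math.max(1, Math.ceil(text.length / 30));
      var i = 0;
      var timer = setInterval(function() {
        i = Math.min(text.length, i + step);
        taskEl.innerHTML = '<s>' + text.substring(0, i) + '</s>' +
            text.substring(i);
        if (i >= text.length) clearInterval(timer);
      }, 16);
    }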

Finally, some friends thought I should include a section in the essay about "what browser vendors can do to help" since clearly the browser does not have the right APIs to make things like Google Tasks easy to implement. As you may imagine, I was getting pretty tired of writing the essay, so I didn't include it, but this is still an important topic. At a minimum, an API to return the current (x, y) position of the cursor as well as one to place it at an (x, y) position would help, though even that is ambiguous since a cursor is generally a line, not a point. It would be interesting to see how other UI toolkits handle this.
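You can approximate both halves of that API with what browsers expose today, which also shows why a first-class API would be nicer (a sketch; caretRangeFromPoint is WebKit-only, and Firefox's equivalent is caretPositionFromPoint):

    // Where is the caret? Collapse the selection to a point and measure it.
    function getCaretPoint() {
      var sel = window.getSelection();
      if (sel.rangeCount === 0) return null;
      var range = sel.getRangeAt(0).cloneRange();
      range.collapse(true);
      var rect = range.getClientRects()[0];  // can be empty at line boundaries
      return rect ? {x: rect.left, y: rect.top} : null;
    }

    // Put the caret at (x, y).
    function setCaretPoint(x, y) {
      var range = document.caretRangeFromPoint(x, y);
      var sel = window.getSelection();
      sel.removeAllRanges();
      sel.addRange(range);
    }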

Oh, and I should mention that if I were to reimplement Google Tasks today and had enough resources, I would not implement it the way that I did. If you look at Google Docs, you will see that your cursor is not a real cursor, but a blinking <div>. That means that Docs is taking responsibility for turning all of your mouse and keyboard input into formatted text. It eliminates all of the funny stuff with contentEditable by basically rebuilding...well, everything related to text editing in the browser. If I had libraries like that to work with (ones that already have support for pushing changes down to multiple editors in real time), then I would leverage those instead. Of course, if you don't have code like that available, then that is a pretty serious investment to make. Engineering is all about picking the right tradeoffs!

Want to learn more about the technologies used to build Google Tasks? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly).