tag:blogger.com,1999:blog-22462599354831011492024-03-19T05:29:19.622-04:00Bolinfest ChangeblogWhen bolinfest.com changes, you'll be the first to know.Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.comBlogger109125tag:blogger.com,1999:blog-2246259935483101149.post-7167233054413530652024-02-26T19:23:00.000-05:002024-02-26T19:23:34.988-05:00Meta Badge Post<span id="docs-internal-guid-a3f21b5b-7fff-69e0-a48b-996c37c823dc"><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">Feb 16, 2024 marked the end of my 11.75 year career at Meta. As per tradition, I made an internal post to say good-bye accompanied by a photo of my badge. (Also pictured is my old <a href="https://www.brik.co/collections/shop-brik/products/brik-book-macbook-case">Brik Book laptop cover</a> with my custom Buck design, which I got <i>way</i> more mileage out of than I expected to, as Apple released some lousy MBPs for a stretch, so I ended up using my 2015 MBP with this cover for many years!)</span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">What follows is a slightly modified version of my original post. 
See if you can solve the mystery embedded within.</span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span></span></p><a name='more'></a></span><p></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVFPeTfegvHzLqXETcCsD4r8ZVjf6xHnPTZjB30-KAwod5AeLPzs3Oif8KeuYTYJF3x3MiTbqjILmn7sGV992ov6jHbwGwBc3p3l5HGpwU4LV08oMu2WlbZc31m5xG_yjKzFCAd2-mgrz59xT0oYI3R8Di9I3K3Z5zvYVmE3tAKqT6HsyjQJAEQD-WQ95G/s5712/IMG_0553.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="4284" data-original-width="5712" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVFPeTfegvHzLqXETcCsD4r8ZVjf6xHnPTZjB30-KAwod5AeLPzs3Oif8KeuYTYJF3x3MiTbqjILmn7sGV992ov6jHbwGwBc3p3l5HGpwU4LV08oMu2WlbZc31m5xG_yjKzFCAd2-mgrz59xT0oYI3R8Di9I3K3Z5zvYVmE3tAKqT6HsyjQJAEQD-WQ95G/s320/IMG_0553.jpg" width="320" /></a></div><br /><p></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">Based on some of my historical Workplace posts, you might expect me to crank out 2000+ words to expound on everything that has happened over the past 11.75 years, but uncharacteristically,</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /><br /></span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">i don’t quite have it in me. 
I would rather land some more diffs in fbsource with the</span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><br /></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">time I have left [like an intern on their last day]. One of my favorite Facebook-isms has always been, “This journey is 1% finished,” and even though I’m excited with everything we achieved this past year across</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">Dot Slash, CodeCompose, and of course, <b><<<REDACTED>>></b>, I</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">lament I have a tendency to</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">yammer over the unfinished 99% rather than step back and celebrate the 
1</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /><br /></span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">/ 100 that has been a success. But today more than ever, it’s important that I actually take a pause to appreciate my good fortune, in that</span></p><br /><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">my experience here at Meta has been exceptional: the culture we have cultivated here encouraged me to learn, grow, and push myself far</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">beyond what I thought I was capable of when I walked into Bootcamp in May 2012. 
While</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">of course the company has changed in many ways over the past decade, we still have</span></p><p dir="ltr" style="background-color: white; line-height: 1.656; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">lots of unsolved problems that are unique, interesting, and </span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-style: italic; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">impactful</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"> because of our scale. In turn,
</span><span style="background-color: transparent; font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">
if you’re curious, play well with others, and bias for action, you can have a long and fruitful career here. While there is</span></p><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">no way I can possibly thank everyone, past and present, who has helped me</span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">on my journey here, I do want to give a shoutout to all who have helped build the DevInfra organization, as it has been the</span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">underpinning of my FB experience, and I can’t say enough great things about the people and projects that have made it such a wonderful sandbox for me to play in these past 11+ years. Finally, while good-byes are sad, I remind myself that</span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;">the tech industry is not actually all that big, so I expect many of us will cross paths again down the road. 
Until then!</span><div><span><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span></span></div><div><span><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVsMbvqkN4EiBVgcScJnr-5Myl5_UXl944SvwtYhoNLF-i4ttdUqHjT75rMjvqm5sjStzQD_8y-rwmHFw_1EZvYI1KjkN5bgC2a_dMByzi7P12ioi586gRZGQ764Wc-kV-SAS5xClYTTylF-RSa5eBEmqh4THgB48E-lBvZVa5gA93ESqIDnrXJChGw0jV/s4032/IMG_0516.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="4032" data-original-width="3024" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVsMbvqkN4EiBVgcScJnr-5Myl5_UXl944SvwtYhoNLF-i4ttdUqHjT75rMjvqm5sjStzQD_8y-rwmHFw_1EZvYI1KjkN5bgC2a_dMByzi7P12ioi586gRZGQ764Wc-kV-SAS5xClYTTylF-RSa5eBEmqh4THgB48E-lBvZVa5gA93ESqIDnrXJChGw0jV/s320/IMG_0516.jpg" width="240" /></a></div><br /><span style="font-family: Roboto, sans-serif; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><br /></span></span></div>Bolinfeedhttp://www.blogger.com/profile/16203722926608871325noreply@blogger.com0tag:blogger.com,1999:blog-2246259935483101149.post-51561112552554382652018-09-29T14:35:00.000-04:002018-09-29T14:35:03.464-04:00How I ended up writing opensnoop in pure C using eBPFHello friends, it's been awhile. I recently had the opportunity to do a deep dive on eBPF and I learned a lot in the process. There isn't a lot out there on the subject, so I decided to put together a long-form article about the experience: <a href="https://bolinfest.github.io/opensnoop-native/">https://bolinfest.github.io/opensnoop-native/</a>.
<p>
You'll notice that this is hosted on GitHub Pages rather than on bolinfest.com itself. I expect there to be typos to fix and cross-references to add over time, so it seemed easiest to colocate the article with <a href="https://github.com/bolinfest/opensnoop-native">the code</a> and have both in version control so I can track changes properly. Also, it's nice to be able to compose things in Markdown (or in <a href="https://quip.com/">Quip</a> and then export to Markdown) and then let a GitHub Pages template do the rest, particularly when it comes to syntax highlighting the code samples.Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com58tag:blogger.com,1999:blog-2246259935483101149.post-24887197206599831912017-03-20T04:31:00.000-04:002017-03-20T04:46:05.151-04:00JavaScript vs. Python in 2017<i>I may be one of the last people you would expect to write an essay criticizing JavaScript, but here we are.</i><br />
<br />
Two of my primary areas of professional interest are <a href="https://www.youtube.com/watch?v=M3uWx-fhjUc">JavaScript and “programming in the large.”</a> I gave a <a href="https://www.youtube.com/watch?v=PCL3dXQZ_Wk">presentation back in 2013 at mloc.js</a> where I argued that static typing is an essential feature when picking a language for a large software project. For this reason, <a href="http://blog.bolinfest.com/2017/03/python-airing-of-grievances.html">among others</a>, I historically limited my use of Python to small projects with no more than a handful of files.<br />
<br />
Recently, I needed to build a command-line tool for work that could speak Thrift. I have been enjoying the <a href="https://nuclide.io/">Nuclide</a>/<a href="https://flowtype.org/">Flow</a>/<a href="https://babeljs.io/">Babel</a>/<a href="http://eslint.org/">ESLint</a> toolchain a lot recently, so my first instinct was to use JavaScript for this new project, as well. However, it quickly became clear that if I went that route, <a href="https://github.com/apache/thrift/pull/1175">I would have to spend a lot of time up front on getting the Thrift bindings to work properly</a>. I couldn't convince myself that my personal preference for JavaScript would justify such an investment, so I decided to take a look at Python.<br />
<br />
I was vaguely aware that there was an effort to add support for static typing in Python, so I Googled to find out what the state of the art was. It turns out that it is a tool named <a href="http://mypy.readthedocs.io/en/latest/">Mypy</a>, and it provides <a href="https://en.wikipedia.org/wiki/Gradual_typing">gradual typing</a>, much like <a href="https://flowtype.org/">Flow</a> does for JavaScript or <a href="http://hacklang.org/">Hack</a> does for PHP. Fortunately, Mypy is more like Hack+HHVM than it is like Flow in that a Python 3 runtime accepts the type annotations natively whereas Flow type annotations must be stripped by a separate tool before passing the code to a JavaScript runtime. (To use Mypy in Python 2, you have to put your type annotations in comments, <a href="https://github.com/google/closure-compiler/wiki/Annotating-JavaScript-for-the-Closure-Compiler">operating in the style of the Google Closure toolchain</a>.) Although Mypy does not appear to be as mature as Flow (support for incremental type checking is still in the works, for example), simply being able to succinctly document type information was enough to renew my interest in Python.<br />
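To give a flavor of what this buys you, here is a minimal sketch of my own (not code from the Mypy docs): the annotations below are ordinary Python 3 syntax, so <code>python3 example.py</code> runs the file as-is, while <code>mypy example.py</code> reports the type error on the last line.
<pre class="prettyprint lang-py">
def greet(name: str, excited: bool = False) -> str:
    suffix = '!' if excited else '.'
    return 'Hello, ' + name + suffix

print(greet('world', excited=True))
print(greet(42))  # Mypy flags this call; the runtime ignores the annotations
                  # (and this line eventually fails with a TypeError).
</pre>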
<br />
In researching how to use Thrift from Python, a Google search turned up some sample <a href="https://github.com/facebook/fbthrift/tree/master/thrift/tutorial/py.asyncio">Python code that spoke to Thrift using asynchronous abstractions</a>. After gradual typing, <code>async</code>/<code>await</code> is the other feature in JavaScript that I cannot live without, so this code sample caught my attention! As we recently added support for building projects in Python 3.6 at work, it was trivial for me to get up and running with the latest and greatest features in Python. (Incidentally, I also learned that you really want Python 3.6, not 3.5, as 3.6 has some important <a href="http://mypy.readthedocs.io/en/latest/python36.html">improvements to Mypy</a>, <a href="https://bugs.python.org/issue28613">fixes to the <code>asyncio</code> API</a>, <a href="https://www.python.org/dev/peps/pep-0498/">literal string interpolation like you have in ES6</a>, <a href="https://docs.python.org/3/whatsnew/3.6.html">and more</a>!)<br />
<br />
Coming from <a href="https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f">the era of “modern” JavaScript</a>, one thing that was particularly refreshing was rediscovering how Python is an edit/refresh language out of the box whereas JavaScript used to be that way, but is no more. Let me explain what I mean by that:<br />
<br />
<ul>
<li>In Python 3.6, I can create a new <code>example.py</code> file in my text editor, write Python code that uses <code>async</code>/<code>await</code> and type annotations, switch to my terminal, and run <code>python example.py</code> to execute my code. (A sketch of such a file follows this list.)</li>
<li>In JavaScript, I can create a new <code>example.js</code> file in my text editor, write JavaScript code that uses <code>async</code>/<code>await</code> and type annotations, switch to my terminal, run <code>node example.js</code>, see it fail because it does not understand my type annotations, run <code>npm install -g babel-node</code>, run <code>babel-node example.js</code>, see it fail again because I don't have a <code>.babelrc</code> that declares the <code>babel-plugin-transform-flow-strip-types</code> plugin, rummage around on my machine and find a <code>.babelrc</code> I used on another project, copy that <code>.babelrc</code> to my current directory, run <code>babel-node example.js</code> again, watch it fail because it doesn't know where to find <code>babel-plugin-transform-flow-strip-types</code>, go back to the directory from which I took the <code>.babelrc</code> file and now copy its <code>package.json</code> file as well, remove the junk from <code>package.json</code> that <code>example.js</code> doesn't need, run <code>npm install</code>, get impatient, kill <code>npm install</code>, run <code>yarn install</code>, and run <code>babel-node example.js</code> to execute my code. For bonus points, <code>babel-node example.js</code> runs considerably slower than <code>node example.js</code> (with the type annotations stripped) because it re-transpiles <code>example.js</code> every time I run it.</li>
</ul>
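To make the Python bullet concrete, here is a minimal sketch of what such an <code>example.py</code> might look like (a toy of my own, not from any tutorial); it uses <code>async</code>/<code>await</code>, type annotations, and 3.6's literal string interpolation, and <code>python example.py</code> runs it directly:
<pre class="prettyprint lang-py">
import asyncio

async def fetch_greeting(name: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for real async I/O
    return f'Hello, {name}!'  # literal string interpolation, new in 3.6

loop = asyncio.get_event_loop()
print(loop.run_until_complete(fetch_greeting('world')))
</pre>
<br />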
Indeed, the JavaScript ecosystem offers all sorts of tools to speed up this process using daemons and caching, but you or someone on your team has to invest quite a bit of time researching, assembling, and maintaining a JavaScript toolchain for your project before anyone can write a line of “modern” JavaScript. Although you may ultimately achieve a nice edit/refresh workflow, you certainly will not have one out of the box as you would in Python.<br />
<br />
<br />
<blockquote>
“JavaScript is no longer an edit/refresh language.”</blockquote>
<br />
Another refreshing difference between JavaScript and Python is the “batteries included” nature of Python. If you look at the standard library that comes with JavaScript, it is fairly minimal. The Node environment does a modest job of augmenting what is provided by the standard library, but the majority of the functionality you need inevitably has to be fetched from npm. Specifically, consider the following functionality that is included in Python's standard library, but must be fetched from npm for a Node project (a sketch that exercises each of these using only the standard library follows the list):<br />
<br />
<ul>
<li><a href="https://www.npmjs.com/package/rimraf">deleting a directory</a></li>
<li>argument parsing (<a href="https://www.npmjs.com/package/commander">I</a> <a href="https://www.npmjs.com/package/yargs">can't</a> <a href="https://www.npmjs.com/package/argparse">even...</a>)</li>
<li><a href="https://www.npmjs.com/package/temp">creating a temp file</a></li>
<li>logging (there are too many options for me to decide which ones to link to)</li>
<li>unit testing (again, <a href="https://www.npmjs.com/package/mocha">too</a> <a href="https://www.npmjs.com/package/chai">many</a> <a href="https://www.npmjs.com/package/jasmine">to</a> <a href="https://www.npmjs.com/package/jest">list</a>)</li>
<li><a href="https://www.npmjs.com/package/sprintf-js">printf format strings</a></li>
<li><a href="https://www.npmjs.com/package/left-pad">left-justify a string</a> (<a href="https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/">Node developers are still having nightmares over this one</a>)</li>
</ul>
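For contrast, here is a quick sketch of mine that touches each of those bullets using nothing but Python's standard library:
<pre class="prettyprint lang-py">
import argparse
import logging
import shutil
import tempfile
import unittest

parser = argparse.ArgumentParser()       # argument parsing
parser.add_argument('--name', default='world')
args = parser.parse_args()

logging.basicConfig(level=logging.INFO)  # logging
logging.info('starting up')

tmp_dir = tempfile.mkdtemp()             # creating a temp file/directory
shutil.rmtree(tmp_dir)                   # deleting a directory

print('%-20s|' % args.name)              # printf format strings, left-justified

class NameTest(unittest.TestCase):       # unit testing
    def test_ljust(self):
        self.assertEqual('ab'.ljust(4), 'ab  ')

unittest.main(argv=['example'], exit=False)
</pre>
<br />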
As you can see, for many of these features, there are multiple third-party libraries that provide overlapping functionality. (For example, if you were looking for a JSON parser, would you choose <code>parse-json</code>, <code>safe-json-parse</code>, <code>fast-json-parse</code>, <code>jsonparse</code>, or <code>json-parser</code>?) To make matters worse, npm module names are doled out on a first-come, first-served basis. Much like domain names, this means that great names often go to undeserving projects. (For example, judging from its download count, <a href="https://www.npmjs.com/package/logging">the npm module named <code>logging</code></a> is one of the least popular logging packages for JavaScript.) This makes the comparison of third-party modules all the more time-consuming since the quality of the name is not a useful signal for the quality of the library.<br />
<br />
It is entirely possible that Python's third-party ecosystem is just as bad as npm's. What is impressive is that <i>I have no idea whether that is the case because it is so rare that I have to look to a third-party Python package to get the functionality that I need</i>. I am aware that data scientists rely heavily on third-party packages like NumPy, but unlike the Node ecosystem, there is <i>one</i> NumPy package that everyone uses rather than a litany of competitors named <code>numpy-fast</code>, <code>numpy-plus</code>, <code>simple-numpy</code>, etc.<br />
<br />
<br />
<blockquote>
“We should stop holding up npm as a testament to the diversity of the JavaScript ecosystem, but instead cite it as a failure of JavaScript's standard library.”</blockquote>
<br />
For me, one of the great ironies in all this is that, arguably, a strong standard library would be more highly leveraged in JavaScript than in any other programming language. Why? Because today, every non-trivial web site requires you to download underscore.js or whatever its authors have chosen to compensate for JavaScript's weak core. When you consider the aggregate adverse impact this has on network traffic and page load times, the numbers are frightening.<br />
<br />
<h2>
So...Are You Saying JavaScript is Dead?</h2>
No, no I am not. If you are building UI using web technologies (which describes a lot of developers), then I still think that JavaScript is your best bet. Modulo the emergence of <a href="http://webassembly.org/">WebAssembly</a> (which is worth paying attention to), JavaScript is still the only language that runs natively in the browser. There have been many attempts to take an existing programming language and compile it to JavaScript to avoid “having to” write JavaScript. There are cases where the results were good, but they never seemed to be great. Maybe some transpilation toolchain will ultimately succeed in unseating JavaScript as the language to write in for the web, but I suspect we'll still have the majority of web developers writing JavaScript for quite some time.<br />
<br />
Additionally, the browser is not the only place where developers are building UI using web technologies. Two other prominent examples are <a href="https://electron.atom.io/">Electron</a> and <a href="https://facebook.github.io/react-native/">React Native</a>. Electron is attractive because it lets you write once for Windows, Mac, and Linux while React Native is attractive because it lets you write once for Android and iOS. Both are also empowering because the edit/refresh cycles using those tools are much faster than those of their native equivalents. From a hiring perspective, it seems like developers who know web technologies (1) are available in greater numbers than native developers, and (2) can support more platforms with smaller teams compared to native developers.<br />
<br />
Indeed, I could envision ways in which these platforms could be modified to support Python as the scripting language instead of JavaScript, which could change the calculus. However, one thing that all of the crazy tooling in the JavaScript community has produced is transpiler toolchains like Babel, which make it easier for ordinary developers (who do not have to be compiler hackers) to experiment with new JavaScript syntax. In particular, this tooling has paved the way for things like <a href="https://facebook.github.io/react/docs/jsx-in-depth.html">JSX</a>, which I contend is one of the key features that makes React such an enjoyable technology for building UI. (Note that you can use React without JSX, but it is tedious.)<br />
<br />
To the best of my knowledge, the Python community does not have an equivalent, popular mechanism for experimenting with DSLs within Python. So although it might be straightforward to add Python bindings in these existing environments, I do not think that would be sufficient to get product developers to switch to Python unless changes were also made to the language that made it as easy to express UI code in Python as it is in JavaScript+JSX today.<br />
<br />
<h2>
Key Takeaways</h2>
Python 3.6 has built-in support for gradual typing and async/await. Unlike JavaScript, this means that you can write Python code that uses these language features and run that file directly without any additional tooling. Further, its rich standard library means that you have to spend little time fishing around and evaluating third-party libraries to fill in missing gaps. It is very much a “get stuff done” server-side scripting language, requiring far less scaffolding than JavaScript to get a project off the ground. Although Mypy may not be as featureful or performant as Flow or TypeScript is today, the velocity of that project is certainly something that I am going to start paying attention to.<br />
<br />
By comparison, I expect that JavaScript will remain strong among product developers, but those who use Node today for server-side code or command-line tools would probably be better off using Python. If the Node community wants to resist this change, then I think they would benefit from (1) expanding the Node API to make it more comprehensive, and (2) reducing the startup time for Node. It would be even better if they could modify their runtime to recognize things like type annotations and JSX natively, though that would require changes to V8 (or Chakra, on Windows), which I expect would be difficult to maintain and/or upstream. Getting TC39 to standardize either of those language features (which would force Google/Microsoft's hand to add native support in their JS runtimes) also seems like a tough sell.<br />
<br />
Overall, I am excited to see how things play out in both of these communities. You never know when someone will release a new technology that obsoletes your entire toolchain overnight. For all I know, we might wake up tomorrow and all decide that <a href="http://facebook.github.io/reason/">we should be writing OCaml</a>. Better set your alarm clock.
<p>
(This post is also <a href="https://medium.com/@mbolin/javascript-vs-python-in-2017-d31efbb641b4">available on Medium</a>.)Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com81tag:blogger.com,1999:blog-2246259935483101149.post-92050689521817772372017-03-19T23:56:00.000-04:002017-03-20T04:48:11.944-04:00Python: Airing of Grievances<em>Although this list focuses on the negative aspects of Python, I am publishing it so I can use it as a reference in an upcoming essay about my </em>positive<em> experiences with Python 3.6. <strong>Update:</strong> This is referenced by my post, <a href="http://blog.bolinfest.com/2017/03/javascript-vs-python-in-2017.html">"JavaScript vs. Python in 2017."</a></em>
<p>
I am excited by many of the recent improvements in Python, but here are some of my outstanding issues with Python (particularly when compared to JavaScript):
<h2>PEP-8</h2>
What do you get when you combine 79-column lines, four space indents, and snake case? Incessant line-wrapping, that's what! I prefer 100 cols and 2-space indents for my personal Python projects, but every time a personal project becomes a group project and a true Pythonista joins the team, they inevitably turn on PEP-8 linting with its defaults and reformat everything.
<h2>Having to declare <code>self</code> as the first argument of a method</h2>
As someone who is not a maintainer of Python, I could not do much to fix this, but I did the next best thing: I complained about it on Facebook. In this case, it worked! That is, I provoked <a href="https://github.com/ambv">Łukasz Langa</a> to add a check for this (it now exists as <a href="https://github.com/PyCQA/flake8-bugbear/commit/ff6afda2bffc376a0a0b33e6181324367483598c">B902</a> in <a href="https://github.com/PyCQA/flake8-bugbear">flake8-bugbear</a>), so at least when I inevitably forget to declare <code>self</code>, <a href="https://nuclide.io/docs/languages/python/#code-diagnostics">Nuclide</a> warns me inline as I'm authoring the code.
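For anyone who has not been bitten by this, here is the sort of mistake (a contrived example of mine) that B902 catches before you ever run the code:
<pre class="prettyprint lang-py">
class Counter:
    def __init__(self):
        self.count = 0

    def increment(amount):    # Forgot to declare `self`: flake8-bugbear reports B902 here.
        self.count += amount  # `self` is also an undefined name in this body.

c = Counter()
c.increment(1)  # TypeError: increment() takes 1 positional argument but 2 were given
</pre>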
<h2><code>.pyc</code> files</h2>
These things are turds. I get why they are important for production, but I'm tired of seeing them appear in my filetree, adding <code>*.pyc</code> to <code>.gitignore</code> every time I create a new project, etc.
<h2>Lack of a standard docblock</h2>
Historically, Python has not had a formal type system, so I was eager to document my parameter and return types consistently. If you look at the top-voted answer to “What is the standard Python docstring format?” on StackOverflow, <a href="http://stackoverflow.com/a/24385103/396304">the lack of consensus is extremely dissatisfying</a>. Although Javadoc has its flaws, <a href="http://www.oracle.com/technetwork/articles/java/index-137868.html">Java's documentation story</a> is orders of magnitude better than Python's.
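To illustrate the fragmentation, here is the same hypothetical function documented twice, in two of the competing conventions from that StackOverflow answer (reST style and Google style):
<pre class="prettyprint lang-py">
def distance_rest(a, b):
    """Compute the distance between two points (reST style).

    :param a: the first point
    :param b: the second point
    :returns: the distance between a and b
    :rtype: float
    """

def distance_google(a, b):
    """Compute the distance between two points (Google style).

    Args:
        a: the first point
        b: the second point

    Returns:
        float: the distance between a and b
    """
</pre>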
<h2>Lack of a good, free graphical debugger</h2>
Under duress, I use <a href="https://docs.python.org/2/library/pdb.html">pdb</a>. I frequently edit Python on a remote machine, so most of the free tools do not work for me. I have not had the energy to try to set up <a href="https://www.jetbrains.com/help/pycharm/2016.3/remote-debugging.html">remote debugging in PyCharm</a>, though admittedly that appears to be the best commercial solution. Instead, <a href="https://github.com/facebook/nuclide/commit/694d3c504a32f788ca28f4c044f8d65d6cdcbf67">I tried to lay the groundwork for a Python debugger in Nuclide</a>, which would support remote development by default. I need to find someone who wants to continue that effort.
<h2>Whitespace-delimited</h2>
I'm still not 100% sold on this. In practice, this begets all sorts of subtle annoyances. Code snippets that you copy/paste from the Web or an email have to be reformatted before you can run them. Tools that generate Python code incur additional complexity because they have to keep track of the indent level. Editors frequently guess the wrong place to put your cursor when you introduce a new line whereas typing <code>}</code> gives it the signal it needs to un-indent without further interaction. Expressions that span more than one line frequently have to be wrapped in parentheses in order to parse. The list goes on and on.
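A tiny example of that last point (my own contrived snippet); the commented-out version is a syntax error, while the parenthesized version parses:
<pre class="prettyprint lang-py">
subtotal, tax = 100, 8

# SyntaxError: the parser gives up at the end of the first line.
#     total = subtotal +
#             tax

total = (subtotal +
         tax)  # Wrapping the expression in parentheses lets it span lines.
print(total)
</pre>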
<h2>Bungled rollout of Python 3</h2>
Honestly, this is one of the primary reasons I didn't look at Python seriously for a while. When Python 3 came out, the rhetoric I remember was: “Nothing substantially new here. Python 2.7 works great for me and the <a href="https://docs.python.org/2/library/2to3.html">2to3 migration tool</a> is reportedly buggy. No thanks.” It's rare to see such a large community move forward with such a poorly executed backwards-compatibility story. (Presumably this has contributed to the lack of a default Python 3 installation on OS X, which is another reason that some software shops stick with Python 2.) <a href="http://www.i-programmer.info/news/167-javascript/7932-angularjs-20-is-radically-different-.html">Angular 2</a> is the only other similarly rage-inducing upgrade in recent history that comes to mind.
<h2>I don't understand how imports are resolved</h2>
Arguably, this is a personal failure rather than a failure of Python. That said, compared to Java or Node, whatever Python is doing seems incredibly complicated. Do I need an <code>__init__.py</code> file? Do I need to put something in it? If I do have to put something in it, does it have to define <code>__all__</code>? For a language whose mantra is “There's only one way to do it,” it is frustrating that the answer to all of these questions is “it depends.”
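For what it's worth, here is the smallest arrangement that reliably works for me, shown as three files in one listing (the package and module names are made up): the presence of <code>__init__.py</code> is what makes the directory importable as a package, and <code>__all__</code> only matters if somebody does <code>from mypkg import *</code>.
<pre class="prettyprint lang-py">
# mypkg/__init__.py -- may be completely empty; it marks mypkg/ as a package.
__all__ = ['helper']  # Only consulted by `from mypkg import *`.

# mypkg/helper.py
def shout(msg):
    return msg.upper()

# main.py -- a sibling of the mypkg/ directory.
from mypkg.helper import shout
print(shout('hello'))
</pre>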
<h2>Busted Scoping Rules</h2>
For a language with built-in <code>map()</code> and <code>filter()</code> functions that underscore its support for first-class functions, I am perplexed by how bad the support for closures is in Python. Apparently your options are (1) <code>lambda</code> (which is extremely limited), or (2) a bastardized closure. Borrowing an example from <a href="https://www.smallsurething.com/a-quick-guide-to-nonlocal-in-python-3/">an article that explains the new <code>nonlocal</code> keyword in Python 3</a>, consider the following code in JavaScript:
<pre class="prettyprint lang-js">
function outside() {
  let msg = 'Outside!';
  console.log(msg);
  let inside = () => {
    msg = 'Inside!';
    console.log(msg);
  };
  inside();
  console.log(msg);
}
</pre>
If you ran <code>outside()</code>, you would get the following output (which you would expect from a lexically scoped language like JavaScript):
<pre>
Outside!
Inside!
Inside!
</pre>
Whereas if you wrote this Python code:
<pre class="prettyprint lang-py">
def outside():
    msg = 'Outside!'
    print(msg)
    def inside():
        msg = 'Inside!'
        print(msg)
    inside()
    print(msg)
</pre>
and ran <code>outside()</code>, you would get the following:
<pre>
Outside!
Inside!
Outside!
</pre>
In Python 2, <a href="http://stackoverflow.com/questions/141642/what-limitations-have-closures-in-python-compared-to-language-x-closures">the only way to emulate the JavaScript behavior is to use a mutable container type</a> (which is gross):
<pre class="prettyprint lang-py">
def outside():
    msg_holder = ['Outside!']
    print(msg_holder[0])
    def inside():
        msg_holder[0] = 'Inside!'
        print(msg_holder[0])
    inside()
    print(msg_holder[0])
</pre>
Though apparently Python 3 has a new <code>nonlocal</code> keyword that makes this less distasteful:
<pre class="prettyprint lang-py">
def outside():
    msg = 'Outside!'
    print(msg)
    def inside():
        nonlocal msg
        msg = 'Inside!'
        print(msg)
    inside()
    print(msg)
</pre>
I realize that JavaScript generally forces you to type more with its <code>var</code>, <code>let</code> and <code>const</code> qualifiers, but I find this to be a small price to pay in exchange for unambiguous variable scopes.
Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com118tag:blogger.com,1999:blog-2246259935483101149.post-77952450987871905672015-10-18T17:12:00.000-04:002015-10-18T17:12:21.038-04:00Hacking on Atom Part II: Building a Development ProcessOver one year ago, I wrote a blog post: <a href="http://blog.bolinfest.com/2014/08/hacking-on-atom-part-i-coffeescript.html">“Hacking on Atom Part I: CoffeeScript”</a>. This post (Part II) is long overdue, but I kept putting it off because I continued to learn and improve the way I developed for Atom such that it was hard to reach a “stopping point” where I felt that the best practices I would write up this week wouldn't end up becoming obsolete by something we discovered next week.<br/>
<br/>
Honestly, I don't feel like we're anywhere near a stopping point, but I still think it's important to share some of the things we have learned thus far. (In fact, I <i>know</i> we still have some exciting improvements in the pipeline for our developer process, but those are gated by the Babel 6.0 release, so I'll save those for another post once we've built them out.)<br/>
<br/>
I should also clarify that when I say “we,” I am referring to the Nuclide team, of which I am the tech lead. <a href="http://nuclide.io/">Nuclide</a> is a collection of packages for <a href="https://atom.io/">Atom</a> to provide IDE-like functionality for a variety of programming languages and technologies. The <a href="https://github.com/facebook/nuclide">code is available on GitHub</a> (and we accept contributions!), but it is primarily developed by my team at Facebook. At the time of this writing, Nuclide is composed of 40 Atom packages, so we have quite a bit of Atom development experience under our belts.<br/>
<h2 id='EZcACAqfHYY'>My Transpiler Quest</h2>
When I spun up the Nuclide project, I knew that we were going to produce a lot of JavaScript code. Transpilers such as Traceur were not very mature yet, but I strongly believed that ES6 was the future and I didn't want to start writing Nuclide in ES5 and have to port everything to ES6 later. On a high level, I was primarily concerned about leveraging the following language extensions:<br/>
<div data-section-style='5'><ul id='EZcACASqMju'><li id='EZcACAlQPgu' class='' value='1'>Standardized class syntax
<br/></li><li id='EZcACAS2C8l' class=''>async/await
<br/></li><li id='EZcACAKN8Xz' class=''>JSX (for React)
<br/></li><li id='EZcACAQ5GqY' class=''>type annotations
<br/></li></ul></div>Note that of those four things, only the first one is specified by ES6. async/await <a href="https://tc39.github.io/ecmascript-asyncawait/">appears to be on track for ES7</a>, though I don't know if TC39 will ever be able to agree on a standard for type annotations or let JSX in, but we'll see.<br/>
<br/>
Because of my diverse set of requirements, it was hard to find a transpiler that could provide all of these things:<br/>
<div data-section-style='5'><ul id='EZcACAYo8Ix'><li id='EZcACA4DATy' class='' value='1'><a href="https://github.com/google/traceur-compiler">Traceur</a> provided class syntax and other ES6 features, but more experimental things like async/await were very buggy.
<br/></li><li id='EZcACAIoyi8' class=''><a href="https://github.com/microsoft/typescript">TypeScript</a> provided class syntax and type annotations, but I knew that <a href="http://flowtype.org/">Flow</a> was in the works at Facebook, so ultimately, that is what we would use for type annotations.
<br/></li><li id='EZcACAWFhvS' class=''>React came with <a href="https://github.com/facebook/jstransform">jstransform</a>, which supported JSX and a number of ES6 features.
<br/></li><li id='EZcACAkkXDH' class=''><a href="https://github.com/benjamn/recast">recast</a> provided a general JavaScript AST transformation pipeline. Most notably, the <a href="https://github.com/facebook/regenerator">regenerator</a> transform to provide support for yield and async/await was built on recast.
<br/></li></ul></div>Given my constraints and the available tools, investing in recast by adding more transforms for the other features we wanted seemed like the most promising way to go. In fact, someone had already been working on such an endeavor internally at Facebook, but the performance was so far behind that of jstransform that it was hard to justify the switch.<br/>
<br/>
For a while, I tried doing crude things with regular expressions to hack up our source so that we could use regenerator and jstransform. The fundamental problem was that <i>transpilers do not compose</i>: if each one recognizes language features that cause parse errors in the other, then you cannot use them together. Once we started adding early versions of Flow into the mix to get type checking (and Flow's parser recognized even less of ES6 than jstransform did), the problem became even worse. For a long time, in an individual file, we could have async/await or type checking, but not both.<br/>
<br/>
To make matters worse, we also had to run a file-watcher service that would write the transpiled version of the code someplace where Atom could load it. We tried using a combination of <a href="https://www.npmjs.com/package/gulp">gulp</a> and other things, but all too often a change to a file would go unnoticed, the version on disk would not get transpiled correctly, and we would then have a subtle bug (this seemed to happen most often when rebasing interactively in Git).<br/>
<br/>
It started to become questionable whether putting up with the JavaScript of the future was worth it. Fortunately, it was around this time that <a href="https://babeljs.io/">Babel</a> (<a href="http://babeljs.io/blog/2015/02/15/not-born-to-die/">née 6to5</a>) came on the scene. It did everything that we needed and more. I happily jettisoned the jstransform/regenerator hack I had cobbled together. Our team was so much happier and more productive, but there were still two looming issues:<br/>
<div data-section-style='5'><ul id='EZcACAuHJ3f'><li id='EZcACAuQsk2' class='' value='1'>Babel accepted a much greater input language than Flow.
<br/></li><li id='EZcACAdd9cE' class=''>It did not eliminate the need for a file-watching service to transpile on the fly for Atom.
<br/></li></ul></div>Rather than continue to hack around the problems we were having, we engaged with the Flow and Atom teams directly. For the Flow team, <a href="https://github.com/facebook/flow/issues/560">supporting all of ES6 has been an ongoing issue</a>, but we actively lobbied them to prioritize support for the features that were most important to us (and caused unrecoverable parse errors if they were not addressed), such as support for async/await and following symlinks through require/import statements.<br/>
<br/>
To eliminate our gulp/file-watching contraption, <a href="http://blog.atom.io/2015/02/04/built-in-6to5.html">we upstreamed a pull request to Atom to support Babel natively</a>, just as it already had support for CoffeeScript. Because we didn't want to identify Babel files via a special extension (we wanted the files to be named .js rather than .es6 or something), and because (at least at the time) Babel was too slow to categorically transpile all files with a .js extension, we compromised on using the heuristic “if a file has a .js extension and its contents start with <code>'use babel'</code>, then transpile the file with Babel before evaluating it.” This has worked fairly well for us, but it also meant that everyone had to use the set of Babel options that we hardcoded in Atom. However, once Babel 6.0 comes out, <a href="https://github.com/atom/atom/issues/8416">we plan to work with Atom to let packages specify a .babelrc</a> file so that every package can specify its own Babel options.<br/>
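<br/>
Sketched in Python for brevity (Atom itself is implemented in JavaScript/CoffeeScript, so this is only an illustration of mine, not Atom's actual code), the heuristic amounts to something like:
<pre class="prettyprint lang-py">
def should_transpile_with_babel(path, contents):
    # Transpile .js files whose contents begin with a 'use babel' pragma.
    if not path.endswith('.js'):
        return False
    return contents.lstrip().startswith(("'use babel'", '"use babel"'))
</pre>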
<br/>
It has been a long road, but now that we are in a world where we can use Babel and Flow to develop Atom packages, we are very happy.<br/>
<h2 id='EZcACAj2NLE'>Node vs. Atom Packages</h2>
Atom has its own idea of a package that is very similar to an npm package, but differs in some critical ways:<br/>
<div data-section-style='5'><ul id='EZcACAs91PL'><li id='EZcACAivw3Z' class='' value='1'>Only one version/instance of an Atom package can be loaded globally in the system.
<br/></li><li id='EZcACAkaBmV' class=''>An Atom package cannot declare another Atom package as a dependency. (This somewhat follows from the first bullet point because if two Atom packages declared a dependency on different versions of the same Atom package, it is unclear what the right way to resolve it would be.)
<br/></li><li id='EZcACAsbVD6' class=''>Because Atom packages cannot depend on each other, it is not possible to pull in another one synchronously via <code>require()</code>.
<br/></li><li id='EZcACAbfEht' class=''>Atom packages have special folders such as <code>styles</code>, <code>grammars</code>, <code>snippets</code>, etc. The contents of these folders must adhere to a specific structure, and their corresponding resources are loaded when the Atom package is <i>activated</i>.
<br/></li></ul></div>This architecture is particularly problematic when trying to build reusable UI components. Organizationally, it makes sense to put something like a combobox widget in its own package that can be reused by other packages. However, because it likely comes with its own stylesheet, it must be bundled as an Atom package rather than a Node package.<br/>
<br/>
To work around these issues, we introduced the concept of three types of packages in Nuclide: Node/npm, Node/apm, and Atom/apm. We have <a href="https://github.com/facebook/nuclide/blob/master/pkg/README.md">a README that explains this in detail</a>, but the key insight is that a Node/apm package is a package that is available via npm (and can be loaded via <code>require()</code>), but is structured like an Atom package. We achieved this by introducing a utility, <a href="https://www.npmjs.com/package/nuclide-atom-npm">nuclide-atom-npm</a>, which loads the code from a Node package, but also installs the resources from the <code>styles/</code> and <code>grammars/</code> directories in the package, if present. It also adds a hidden property on <code>global</code> to ensure that the package is loaded only once globally.<br/>
<br/>
Nuclide has many packages that correspond to UI widgets that we use in Atom: <a href="https://www.npmjs.com/package/nuclide-ui-checkbox">nuclide-ui-checkbox</a>, <a href="https://www.npmjs.com/package/nuclide-ui-dropdown">nuclide-ui-dropdown</a>, <a href="https://www.npmjs.com/package/nuclide-ui-panel">nuclide-ui-panel</a>, etc. If you look at the implementation of any of these packages, you will see that they load their code using <code>nuclide-atom-npm</code>. Using this technique, we can reliably require the building blocks of our UI synchronously, which would not be the case if our UI components were published as Atom packages. This is especially important to us because we build our UI in Atom using React.<br/>
<h2 id='EZcACAi8Nbp'>React</h2>
In July 2014, the Atom team had a big blog post that <a href="http://blog.atom.io/2014/07/02/moving-atom-to-react.html">celebrated their move to React</a> for the editor. Months later, they had a very <a href="https://github.com/atom/atom/pull/5624">quiet pull request</a> that removed the use of React in Atom. Basically, the Atom team felt that to achieve the best possible performance for their editor, they needed to hand-tune that code rather than risk the overhead of an abstraction, such as React.<br/>
<br/>
This was unfortunate for Nuclide because one of the design limitations of React is that it expects there to be only one instance of the library installed in the environment. When you own all of the code in a web page, it is not a big deal to commit to using only one version (if anything, it's desirable!), but when you are trying to create an extensible platform like Atom, it presents a problem. Either Atom has to choose the version of React that every third-party package must use, or every third-party package must include its own version of React.<br/>
<br/>
The downside of Atom mandating the version of React is that, at some point, some package authors will want them to update it when a newer version of React comes out whereas other package authors will want them to hold back so their packages don't break. The downside of every third-party package (that wants to use React) including its own version is that multiple instances of React are not guaranteed to work together when used in the same environment. (It also increases the amount of code loaded globally at runtime.)<br/>
<br/>
Further, React and Atom also conflict because they both want to control how events propagate through the system. To that end, when Atom was using React for its editor, it created a <a href="https://www.npmjs.com/package/react-atom-fork">fork of React</a> that did not interfere with Atom's event dispatch. This was based on React v0.11, which quickly became an outdated version of React.<br/>
<br/>
Because we knew that Atom planned to remove their fork of React from their codebase before doing their 1.0 release, we needed to create our own fork that we could use. To that end, Jonas Gebhardt created the <a href="https://www.npmjs.com/package/react-for-atom">react-for-atom</a> package, which is an Atom-compatible fork of React based off of React v0.13. As we did for the <code>nuclide-atom-npm</code> package, we added special logic that ensured that <code>require('react-for-atom')</code> could be called by various packages loaded by Atom, but it cached the result on the global environment so that subsequent calls to <code>require()</code> would return the cached result rather than load it again, making it act as a singleton.<br/>
<br/>
<a href="https://github.com/atom/atom/issues/5756">Atom does not provide any sort of built-in UI library by default.</a> The Atom UI itself does not use a standard library: most of the work is done via raw DOM operations. Although this gives package authors a lot of freedom, it also arrests some via a paradox of choice. On Nuclide, we have been using React very happily and successfully inside of Atom. The combination of Babel/JSX/React has facilitated producing performant and maintainable UI code.<br/>
<h2 id='EZcACA1HrXK'>Monorepo</h2>
Atom is developed as a collection of over 100 packages, each contained in its own repository. From the outside, this seems like an exhausting way to develop a large software project. Many, many commits to Atom are just minor version bumps to dependencies in <code>package.json</code> files across the various repos. To me, this is a lot of unnecessary noise.<br/>
<br/>
Both <a href="https://www.youtube.com/watch?v=X0VH78ye4yY">Facebook</a> and <a href="https://www.youtube.com/watch?v=W71BTkUbdqE">Google</a> have extolled the benefits of a single, monolithic codebase. One of the key advantages over the multi-repo approach is that it makes it easier to develop and land atomic changes that span multiple parts of a project. For this and other reasons, we decided to develop the 100+ packages for Nuclide in a single repository.<br/>
<br/>
Unfortunately, the Node/npm ecosystem makes developing multiple packages locally out of one repository a challenge. It has a heavy focus on semantic versioning and encourages dependencies to be fetched from <a href="https://www.npmjs.org/">https://www.npmjs.org/</a>. Yes, it is true that you can specify local dependencies in a <code>package.json</code> file, but then you cannot publish your <code>package.json</code> file as-is.<br/>
<br/>
Rather than embed local, relative paths in <code>package.json</code> files that get rewritten upon publishing to npm (which I think would be a reasonable approach), we created a script that walks our tree, symlinking local dependencies under <code>node_modules</code> while using <code>npm install</code> to fetch the rest. By design, we specify the semantic version of a local dependency as <code>0.0.0</code> in a <code>package.json</code> file. These version numbers get rewritten when we publish packages to npm/apm.<br/>
<br/>
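As a rough sketch of what such a script does (illustrative only, not Nuclide's actual implementation), imagine visiting each package and symlinking any dependency pinned at <code>0.0.0</code> to its sibling directory in the repo:<br/>
<pre>// Hypothetical sketch: symlink the local 0.0.0 dependencies of one
// package; everything else is left for `npm install` to fetch.
// localPackages: Map from package name to its absolute path in the repo.
const fs = require('fs');
const path = require('path');

function linkLocalDeps(pkgDir, localPackages) {
  const pkg = JSON.parse(
      fs.readFileSync(path.join(pkgDir, 'package.json'), 'utf8'));
  for (const name of Object.keys(pkg.dependencies || {})) {
    if (pkg.dependencies[name] === '0.0.0' && localPackages.has(name)) {
      const dest = path.join(pkgDir, 'node_modules', name);
      fs.mkdirSync(path.dirname(dest), {recursive: true});
      if (!fs.existsSync(dest)) {
        fs.symlinkSync(localPackages.get(name), dest, 'dir');
      }
    }
  }
}</pre>
<br/>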
We also reject the idea of semantic versioning. (<a href="https://twitter.com/jashkenas">Jeremy Ashkenas</a> has a <a href="https://gist.github.com/jashkenas/cbd2b088e20279ae2c8e">great note about the false promises of semantic versioning</a>, calling it “romantic versioning.”) Most Node packages are published independently, leaving the package author to do the work of determining what the new version number should be based on the changes in the new release. Practically speaking, it is not possible to automate the decision of whether a new release merits a minor or major version bump, which means the process is inherently subjective and imperfect. Even when the author gets the new version number right, it is likely that he/she sank a lot of time into doing so.<br/>
<br/>
By comparison, we never publish a new version of an individual Nuclide package: we publish new versions of all of our packages at once, and it is always a minor-minor version bump. (More accurately, we disallow cycles in our own packages' dependencies, so we publish them in topological order, all with the same version number where the code is derived from the same commit hash in our GitHub repo.) It is indeed the case that for many of our packages, there is nothing new from version N to N+1. That is, the only reason N+1 was published is because some other package in our repo needed a new version to be published. We prefer to have some superfluous package publications than to sink time into debating new version numbers and updating scores of <code>package.json</code> files.<br/>
<br/>
Having all of the code in one place makes it easier to add processes that are applied across all packages, such as <a href="https://github.com/facebook/nuclide/blob/v0.0.32/scripts/lib/publishers/npm_publisher.py#L91">pre-transpiling Babel code before publishing it to npm or apm</a> or <a href="https://github.com/facebook/nuclide/blob/master/scripts/dev/eslint">running ESLint</a>. Also, because our packages can be topologically sorted, many of the processes that we run over all of our packages can be parallelized, such as building or publishing. Although this seems to fly in the face of traditional Node development, it makes our workflow dramatically simpler.<br/>
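<br/>
As a sketch of the topological ordering itself (illustrative only; the real logic lives in Nuclide's scripts directory), a simple depth-first traversal suffices because we disallow cycles among our own packages:<br/>
<pre>// Hypothetical sketch: order packages so that each one appears after
// all of its local dependencies. Assumes the graph has no cycles.
function topologicallySort(packages) {
  // packages: Map from name to {dependencies: Array of names}.
  const sorted = [];
  const visited = new Set();
  function visit(name) {
    if (visited.has(name)) {
      return;
    }
    visited.add(name);
    for (const dep of packages.get(name).dependencies) {
      if (packages.has(dep)) {  // Only local packages matter here.
        visit(dep);
      }
    }
    sorted.push(name);
  }
  for (const name of packages.keys()) {
    visit(name);
  }
  return sorted;
}</pre>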
<h2 id='EZcACAxWHFO'>async/await</h2>
I can't say enough good things about using async/await, which is something that we can do because we use Babel. Truth be told, as a side-project, I have been trying to put all of my thoughts around it into writing. I'm at 35 pages and counting, so we'll see where that goes.<br/>
<br/>
An oversimplified explanation of the benefit is that it makes asynchronous code no harder to write than synchronous code. Many times, JavaScript developers know that designing code to be asynchronous will provide a better user experience, but give in to writing synchronous code because it's easier. With async/await, you no longer have to choose, and everyone wins as a result.<br/>
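<br/>
For example (a trivial sketch using Node's promise-based fs API, not code from Nuclide), the asynchronous version reads just like its synchronous counterpart:<br/>
<pre>const {readFile} = require('fs').promises;

// With async/await, asynchronous code keeps the linear shape of
// synchronous code: no callback pyramids, no explicit .then() chains.
async function loadConfig(configPath) {
  const contents = await readFile(configPath, 'utf8');
  return JSON.parse(contents);
}</pre>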
<h2 id='EZcACA1udkX'>Flow</h2>
Similar to async/await, I can't say enough good things about static typing, particularly optional static typing. I gave a talk at the first mloc.js, <a href="https://www.youtube.com/watch?v=PCL3dXQZ_Wk">“Keeping Your Code in Check,”</a> demonstrating different approaches to static typing in JavaScript, and argued why it is particularly important for large codebases. Flow is a fantastic implementation of static typing in JavaScript.<br/>
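<br/>
As a small illustration (not Nuclide code), Flow catches the bad call below at check time without changing the runtime behavior of the JavaScript, since Babel strips the annotations:<br/>
<pre>/* @flow */
function totalLength(lines: Array&lt;string&gt;): number {
  return lines.reduce((sum, line) => sum + line.length, 0);
}

totalLength(['a', 'bc']);  // OK: evaluates to 3.
totalLength([1, 2, 3]);    // Flow reports a type error here.</pre>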
<h2 id='EZcACAzt90T'>Closing Thoughts</h2>
In creating Nuclide, we have written over 50,000 lines of JavaScript[1] and have created over 100 packages, 40 of which are Atom packages[2]. Our development process and modularized codebase have empowered us to move quickly. As you can tell from this essay (or from our <a href="https://github.com/facebook/nuclide/tree/master/scripts">scripts directory</a>), building out this process has been a substantial, deliberate investment, but I think it has been well worth it. We get to use the best JavaScript technologies available to us today with minimal setup and an edit/refresh cycle of a few seconds.<br/>
<br/>
Honestly, the only downside is that we seem to be ahead of Atom in terms of the number of packages it can support. We have an <a href="https://github.com/atom/atom/issues/8217">open issue against Atom about how to install a large number of packages more efficiently</a> (note this is a problem when you install Nuclide via <a href="https://atom.io/packages/nuclide-installer">nuclide-installer</a>, but not when you build from source). Fortunately, this should [mostly] be a simple matter of programming™: there is no fundamental design flaw in Atom that is getting in the way of a solution. We have had an outstanding working relationship with Atom thus far, finding solutions that improve things for not just Nuclide, but the entire Atom community. It has been a lot of fun to build on top of their platform. For me, working on Atom/Nuclide every day has been extremely satisfying due to the rapid speed of development and the tools we are able to provide to our fellow developers as a result of our work.<br/>
<br/>
[1] To exclude tests and third-party code, I got this number by running:<br/>
<pre id='EZcACARuSfC'>git clone https://github.com/facebook/nuclide.git<br>cd nuclide<br>git ls-files pkg | grep -v VendorLib | grep -v hh_ide | grep -v spec | grep -e '\.js$' | xargs wc -l</pre>
[2] There is a script in Nuclide that we use to list our packages in topologically sorted order, so I got these numbers by running:<br/>
<pre id='EZcACAqhThR'>./scripts/dev/packages | wc -l<br>./scripts/dev/packages --package-type Atom | wc -l</pre>
Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com412tag:blogger.com,1999:blog-2246259935483101149.post-68404203562543803072015-03-05T20:40:00.001-05:002015-03-05T20:40:50.201-05:00Trying to prove that WeakMap is actually weakI don't have a ton of experience with weak maps, but I would expect the following to work:
<pre class="prettyprint">
// File: example.js
var key = {};
var indirectReference = {};
indirectReference['key'] = (function() { var m = {}; m['foo'] = 'bar'; return m; })();
var map = new WeakMap();
map.set(key, indirectReference['key']);
console.log('Has key after setting value: %s', map.has(key));
delete indirectReference['key'];
console.log('Has key after deleting value: %s', map.has(key));
global.gc();
console.log('Has key after performing global.gc(): %s', map.has(key));</pre>
I downloaded the latest version of <a href="https://iojs.org/">io.js</a> (so that I could have a JavaScript runtime with <code>WeakMap</code> and <code>global.gc()</code>) and ran it as follows:
<pre>./iojs --expose-gc example.js</pre>
Here is what I see:
<pre>
Has key after setting value: true
Has key after deleting value: true
Has key after performing global.gc(): true
</pre>
Despite my best efforts, I can't seem to get <code>WeakMap</code> to give up the value that is mapped to <code>key</code>. Am I doing it wrong? Obviously I'm making some assumptions here, so I'm curious where I'm off.
<p>
Ultimately, I would like to be able to use <code>WeakMap</code> to write some tests to ensure certain objects get garbage collected and don't leak memory.Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com26tag:blogger.com,1999:blog-2246259935483101149.post-54241393399486179662014-08-19T02:17:00.000-04:002014-08-19T02:17:11.742-04:00Hacking on Atom Part I: CoffeeScript<p>
Atom is written in CoffeeScript rather than raw JavaScript. As you can imagine, <a href="https://discuss.atom.io/t/why-coffeescript/131">this is contentious with “pure” JavaScript developers</a>. I had a fairly neutral stance on CoffeeScript coming into Atom, but after spending some time exploring its source code, I am starting to think that this is not a good long-term bet for the project.
<p>
<h2>Why CoffeeScript Makes Sense for Atom</h2>
<p>
This may sound silly, but perhaps the best thing that CoffeeScript provides is a standard way to declare JavaScript classes and subclasses. Now before you get out your pitchforks, hear me out:
<p>
The “right” way to implement classes and inheritance in JavaScript <a href="http://bolinfest.com/javascript/inheritance.php">has been a subject of great debate</a> for some time. Almost all options for simulating classes in ES5 are verbose, unnatural, or both. I believe that official class syntax is being introduced in ES6 not because JavaScript wants to be thought of as an object-oriented programming language, but because the desire for developers to project the OO paradigm onto the language today is so strong that it would be irresponsible for the TC39 to ignore their demands. This inference is based on the less aggressive <a href="http://wiki.ecmascript.org/doku.php?id=strawman:maximally_minimal_classes">maximally minimal classes</a> proposal that has superseded <a href="http://wiki.ecmascript.org/doku.php?id=harmony:classes">an earlier, more fully-featured proposal</a>, as the former states that “[i]t focuses on providing an absolutely minimal class declaration syntax that all interested parties may be able to agree upon.” Hooray for design by committee!
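<p>
To make the contrast concrete, here is an illustrative sketch (not code from Atom) comparing a hand-rolled ES5 "class" with the class syntax that ES6 was standardizing at the time:
<pre class="prettyprint">
// ES5: simulate a class with a constructor function and its prototype.
function Editor(name) {
  this.name = name;
}
Editor.prototype.describe = function() {
  return 'Editor: ' + this.name;
};

// ES6: the same structure with official class syntax.
class ModernEditor {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return 'Editor: ' + this.name;
  }
}
</pre>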
<p>
<div style="background-color: #ddd; padding: 5px;">
<strong>Aside: Rant</strong>
<p>
The EcmaScript wiki is the most frustrating heap of documentation that I have ever used. Basic questions, such as, “Are Harmony, ES.next, and ES6 the same thing?” are extremely difficult to answer. Most importantly, it is impossible to tell what the current state of ES6 is. For example, with classes, there is a proposal under <a href="http://wiki.ecmascript.org/doku.php?id=harmony:classes">harmony:classes</a>, but another one at <a href="http://wiki.ecmascript.org/doku.php?id=strawman:maximally_minimal_classes">Maximally Minimal Classes</a>. Supposedly the latter supersedes the former. However, the newer one has no mention of static methods, which the former does and both Traceur and JSX support. Perhaps the anti-OO folks on the committee won and the transpilers have yet to be updated to reflect the changes (or do not want to accept the latest proposal)?
<p>
In practice, the best information I have found about the latest state of ES6 is at <a href="https://github.com/lukehoban/es6features">https://github.com/lukehoban/es6features</a>. I stumbled upon that via a post in <a href="https://groups.google.com/forum/#!topic/traceur-compiler-discuss/GZJh8Bmi37w">traceur-compiler-discuss</a> that linked to <a href="http://sett.ociweb.com/sett/settJul2014_files/settJul2014.pdf">some conference talk</a> that featured the link to the TC39 member’s README.md on GitHub. (The conference talk also has an explicit list of what to expect in ES7, in particular async/await and type annotations, which is not spelled out on the EcmaScript wiki.) Also, apparently using an entire GitHub repo to post a single web page is a thing now. What a world.
</div>
<p>
To put things in perspective, I ran a <code>git log --follow</code> on some key files in the <a href="https://github.com/atom/atom/tree/master/src">src</a> directory of the main Atom repo, and <a href="https://github.com/atom/atom/commit/06a46b1cb8a3420bf0cb77c70000d2d1e0c00d92">one of the earliest commits I found introducing a .coffee file is from August 24, 2011</a>. Now, let’s consider that date in the context of modern JavaScript transpiler releases:
<p>
<ul>
<li>CoffeeScript introduced its class syntax on February 27, 2010 (<a href="http://coffeescript.org/">version 0.5.3</a>).
<li><a href="http://www.youtube.com/watch?feature=player_detailpage&v=ntDZa7ekFEA">Google announced Traceur</a> on May 3, 2011 at JSConf US.
<li><a href="http://googlecode.blogspot.com/2011/10/dart-language-for-structured-web.html">Google announced Dart</a> on October 10, 2011.
<li><a href="http://blogs.msdn.com/b/somasegar/archive/2012/10/01/typescript-javascript-development-at-application-scale.aspx">Microsoft released TypeScript</a> on October 1, 2012.
<li><a href="https://www.youtube.com/watch?v=GW0rj4sNH2w">Facebook released React</a>, which included <a href="http://facebook.github.io/react/docs/jsx-in-depth.html">JSX</a> and other transforms on May 29, 2013 at JSConf US. Later, these transforms (including ES6 transforms) were spun out into their own repo, <a href="https://github.com/facebook/jstransform">jstransform</a>, on August 19, 2013.
</ul>
<p>
As you can see, at the time Atom was spinning up, CoffeeScript was the only mature transpiler. If I were starting a large JavaScript project at that time (well, we know <em>I</em> would have used Closure...) and wanted to write in a language whose transpilation could be improved later as JavaScript evolved, then CoffeeScript would have made perfect sense. Many arguments about what the “right JavaScript idiom is” (such as how to declare classes and subclasses) go away because CoffeeScript is more of a “there’s only one way to do it” sort of language.
<p>
As I mentioned in my comments on <a href="https://www.facebook.com/mbolin/posts/10100286938300148">creating a CoffeeScript for Objective-C</a>, I see three primary benefits that a transpiled language like CoffeeScript can provide:
<p>
<ul>
<li>Avoids boilerplate that exists in the target language.
<li>Subsets the features available in the source language to avoid common pitfalls that occur when those features are used in the target language.
<li>Introduces explicit programming constructs in place of unofficial idioms.
</ul>
<p>
Note that if you have ownership of the target language, then you are in a position to fix these things yourself. However, most of us are not, and even those who are may not be that patient, so building a transpiler may be the best option. As such, there is one other potential benefit that I did not mention in my original post, but has certainly been the case for CoffeeScript:
<p>
<ul>
<li>Influences what the next version of the target language looks like.
</ul>
<p>
But back to Atom. If you were going to run a large, open source project in JavaScript, you could potentially waste a lot of time trying to get your contributors to write JavaScript in the same way as the core members of the project. With CoffeeScript, there is much less debate.
<p>
Another benefit of using CoffeeScript throughout the project is that config files are in <a href="https://github.com/bevry/cson">CSON</a> rather than JSON. (And if you have been following this blog, you know that <a href="http://bolinfest.com/essays/json.html">the limitations of JSON really irritate me</a>. Go <a href="http://json5.org/">JSON5</a>!) However, CSON addresses many of the shortcomings of JSON because it supports comments, trailing commas, unquoted keys, <em>and</em> multiline strings (via triple-quote rather than backslashes). Unfortunately, it also supports <em>all of JavaScript</em> <a href="https://github.com/bevry/cson/blob/master/README.md">as explained in the README</a>:
<blockquote>
“CSON is fantastic for developers writing their own configuration to be executed on their own machines, but bad for configuration you can't trust. This is because parsing CSON will execute the CSON input as CoffeeScript code...”
</blockquote>
Uh...what? Apparently there’s a project called <a href="https://github.com/groupon/cson-safe">cson-safe</a> that is not as freewheeling as the <a href="https://github.com/bevry/cson">cson</a> npm module, and <a href="https://github.com/atom/season/blob/master/package.json">it looks like Atom uses the safe version</a>. One of the unfortunate realities of the npm ecosystem is that the first mover gets the best package name even if he does not have the best implementation. C’est la vie.
<p>
<h2>Downsides of Atom Using CoffeeScript</h2>
<p>
I don’t want to get into a debate about the relative merits of CoffeeScript as a language here (though I will at some other time, I assure you), but I want to discuss two practical problems I have run into that would not exist if Atom were written in JavaScript.
<p>
First, many (most?) Node modules that are written in CoffeeScript have only the transpiled version of the code as part of the npm package. That means that when I check out Atom and run <code>npm install</code>, I have all of this JavaScript code under my <code>node_modules</code> directory that has only a vague resemblance to its source. If you are debugging and have source maps set up properly, then this is fine, but it does not play nice with <code>grep</code> and other tools. Although the JavaScript generated from CoffeeScript is fairly readable, it is not exactly how a JavaScript developer would write it by hand, and more importantly, single-line comments are stripped.
<p>
Second, because Atom is based on Chrome and Node, JavaScript developers writing for Atom have the benefit of being able to rely on the presence of ES6 features as they are supported in V8. Ordinary web developers do not have this luxury, so it is very satisfying to be able to exploit it! However, as ES6 introduces language improvements, they will not be available in CoffeeScript until CoffeeScript supports them. Moreover, as JavaScript evolves into a better language (by copying features from languages like CoffeeScript), web developers will likely prefer “ordinary” JavaScript because it is likely that they will have better tool support. As the gap between JavaScript and CoffeeScript diminishes, the cost of doing something more nonstandard (i.e., using a transpiler) does not outweigh the benefits as much as it once did. Arguably, the biggest threat to CoffeeScript is CoffeeScript itself!
<p>
<h2>Closing Thoughts</h2>
<p>
Personally, I plan to develop Atom packages in JavaScript rather than CoffeeScript. I am optimistic about where JavaScript is going (particularly with respect to ES7), so I would prefer to be able to play with the new language features directly today. I don’t know how long my code will live (or how long ES6/ES7 will take), but I find comfort in knowing that I am less likely to have to rewrite my code down the road when JavaScript evolves. Finally, there are some quirks to CoffeeScript that irk me enough to stick with traditional JavaScript, but I’ll save those for another time.
Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com79tag:blogger.com,1999:blog-2246259935483101149.post-33121977363828811382014-08-19T01:53:00.001-04:002014-08-19T01:53:53.553-04:00Hacking on Atom SeriesI have started to spend some time hacking on <a href="https://atom.io/">Atom</a>, and I wanted to share some of my thoughts and learnings from this exploration. I have not posted on my blog in a while, and this seems like the right medium to document what I have discovered (i.e., too long for a tweet; not profound enough for an essay; failing to answer in the form of a question suitable for the <a href="https://discuss.atom.io/">Atom discussion forum</a>).
<p>
In the best case, in places where I have stumbled with Atom’s design or implementation, I hope that either (1) someone sets me straight on best practices and why things work the way they do, or (2) to spur discussion on how to make things better. Hacking on Atom is a lot of fun, but I still have a lot to learn.Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com32tag:blogger.com,1999:blog-2246259935483101149.post-4303351555932779002013-10-28T12:59:00.001-04:002013-10-28T12:59:23.370-04:00Today I am making Appendix B of my book available for free onlineToday I am making one of the appendices of my book available for free: <a href="http://bolinfest.com/javascript/misunderstood.html">Appendix B: Frequently Misunderstood JavaScript Concepts</a>. You might expect that appendices are just extra junk that authors stick at the end of the book to up their page count, like reference material that is readily available online. And if that were the case, this would be an uncharitable and worthless thing to do.
<p>
As it turns out, many have told me that <em>Appendix B</em> has been the most valuable part of the book for them, so I assure you that I am not releasing the "dregs" of the text. Unsurprisingly, many, many more folks are interested in JavaScript in general than are interested in Closure Tools, which is presumably why this appendix has been a favorite.
<p>
Because the manuscript was written in DocBook XML, it was fairly easy to programmatically translate the XML into HTML. For some reason, I do not have a copy of the figure from Appendix B that appears in the book, so I had to recreate it myself. Lacking real diagramming tools at the time, I <a href="http://bolinfest.com/javascript/prototype_chain2.png">created the original illustration using Google Docs</a>. It took several rounds with the illustrator at O'Reilly to get it reproduced correctly for my book. Since I was too embarrassed to include the Google Docs version in this "HTML reprint," I redid it using Omnigraffle, which I think we can all agree <a href="http://www.bolinfest.com/javascript/appendix_b_box_and_arrow.png">looks much better</a>.
<p>
In some ways, the HTML version is better than the print or ebook versions in that (1) the code samples are syntax highlighted, and (2) it is possible to hyperlink to individual sections. Depending on how this is received, I may make more chunks of the book available in the future. As I mentioned, the <a href="https://gist.github.com/bolinfest/7200441">script to convert the XML to HTML is already written</a>, though it does contain some one-off fixes that would need to be generalized to make it reusable for the other chapters.
<p>
<em>If you want to read the rest of <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, you can find it on Amazon and other web sites that sell dead trees.</em>
Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com12tag:blogger.com,1999:blog-2246259935483101149.post-10491199959293899172013-08-17T21:39:00.001-04:002013-08-17T21:39:32.154-04:00Hello, World!Traditionally, "Hello, World!" would be the first post for a new blog, but this is post #100 for the Bolinfest Changeblog. However, I have posted so infrequently in the past year and a half that I thought it would be nice to check in, say hello, and report on what I have been up to, as well as on some of the projects that I maintain, since I have not been able to respond to all of the email requests for status updates. I'll start with the big one:
<h4>I moved to California and joined Facebook</h4>
The latter was not the cause for the former. I moved to California to co-found a startup (check!), but after killing myself for three months, I decided that the combination of working on consumer software that served millions (now billions) of users and getting a stable paycheck was something I really enjoyed. Now that I was in California, there were more companies where I could do that than there were in New York City, so I shopped around a bit. After working at Google for so long and feeling like Facebook was The Enemy/technically inferior (they write PHP, right?), I never thought that I would end up there.
<p>
Fortunately, I decided to test my assumptions about Facebook by talking to engineers there who I knew and greatly respected, and I learned that they were quite happy and doing good work. Facebook in 2012 felt more like the Google that I joined in 2005 than the Google of 2012 did, so I was also excited about that. Basically, Facebook is a lot smaller than Google, yet I feel like my potential for impact (both in the company and in the world) is much larger. I attended Google I/O this year, and although I was in awe of all of the announcements during the keynote, I imagined that whatever I could do at Google would, at best, be one tiny contribution to that presentation. Personally, I found the thought of feeling so small pretty demoralizing.
<p>
Fast-forward to life at Facebook, and things are exciting and going well. "Done is better than perfect" is one of our mantras at Facebook, which encourages us to move fast, but inevitably means that there is always some amount of shit that is broken at any given time. Coming from Google, this was pretty irritating at first, but I had to acknowledge that that mentality is what helped Facebook get to where it is today. If you can be Zen about all of the imperfections, you can find motivation in all the potential impact you can have. At least that's what I try to do.
<p>
Using Outlook instead of Gmail and Google Calendar is infuriating, though.
<h4>I spend my time writing Java instead of JavaScript</h4>
In joining a company that made its name on the Web, and <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">knowing a thing or two about JavaScript</a>, I expected that I would be welcomed into the ranks of UI Engineering at Facebook. Luckily for me, that did not turn out to be the case.
<p>
At Facebook, there are many web developers who rely on abstractions built by the core UI Engineering (UIE) team. When I joined, my understanding was that if you wanted to be a UIE, first you needed to "pay your dues" as a web developer for at least a year so that you understood the needs of the Web frontend, and then perhaps you could be entrusted as a UIE. In general, this seems like a reasonable system, but all I was thinking was: "I made Google Calendar and <a href="http://bolinfest.com/javascript/caret-navigation.html">Google Tasks work in IE6</a>. I have paid my fucking dues."
<p>
Fortunately, some friends steered me toward joining mobile at Facebook, it being the future of the company and whatnot. At first, I was resistant because, to date, I had built a career on writing JavaScript, so I was reluctant to lay that skillset by the wayside and learn a new one. I also realized that that line of thinking is what would likely turn me into a dinosaur one day, so I needed to resist it.
<p>
Okay, so Android or iOS? I had just gotten a new iPhone after living miserably on a borrowed Nexus One for the previous six months (my 3GS only lasted 75% of the way through my contract), so the thought of doing Android development, which would require me to live with an Android phone every day again, was pretty distasteful. However, my alternative was to write Objective-C (a language where you have to manage your own memory and that suffers from dissociative identity disorder, as it has dual C and Obj-C constructs for every programming concept) in an IDE I cannot stand (Eclipse figured out how to drag-and-drop editor tabs about a decade ago; why does Xcode make organizing my windows so difficult?) on a platform that irritates me (every time the End key takes me to the end of the document instead of the end of the line, I want to shoot someone). As an Android developer, at least I could use Linux and write in Java (a language that I should probably be ashamed to admit that I enjoy, but whose effectiveness in programming in the large continues to make me a supporter).
<p>
Historically, the iPhone has always been a beloved piece of hardware (and is the device app developers flock to), whereas Android devices have been reviled (anyone remember the G1?). The same was true at Facebook when I joined: more engineers owned iPhones themselves (for a long time, it felt like the only people who owned Android phones were Google employees who had been given them for free, though obviously that has changed...), so more of them wanted to work on the iPhone app, and hence Facebook's iPhone app was a lot better than its Android app. Again, this is what made the possibility of working on Android simultaneously frustrating and intriguing: the codebase was a nightmare, but there was a huge opportunity for impact. For my own career, it was clear that I should learn at least one of Android or iOS, and Android seemed like the one where I was likely to have the most success, so that's what I went with.
<p>
But like I said, the codebase was a nightmare. What was worse was that building was a nightmare. We were using a hacked up version of the standard Ant scripts that Google provides for Android development. Like many things at Facebook, our use case is beyond the scale of ordinary use cases, so the "standard" solution does not work for us. We actually had many Ant scripts that called one another, which very few people understood. As you can imagine, developers' builds frequently got into bad states, in which case people could not build, or would do a clean build every time, which is very slow. In sum, Android development was not fun.
<p>
After dealing with this for a couple of weeks and wondering what I had gotten myself into, I said, "Guys, this is ridiculous: I am going to create a new build system. We can't keep going like this." As you can imagine, there were a lot of objections:
<ul>
<li>"Why can't we use an existing build system like Ninja or Gradle?"
<li>"What happens when Google fixes its build system? Then we'll have to play catch up."
<li>"Would you actually support your custom build system long-term?"
<li>"Seriously, why do engineers always think they can build a better build system?"
</ul>
Resounding support, I had not.
<p>
At that point, I had no credibility in Android, and more importantly, no credibility at Facebook. If I decided to go against the grain and my build system was a failure, I would not have any credibility in the future, either. On the other hand, there was no way I was going to endure life as an Android developer if I had to keep using Ant, so I forged ahead and started working on my own build system.
<p>
In about a month and a half, I built and deployed <a href="http://facebook.github.io/buck/"><strong>Buck</strong>, an Android build system</a>. Builds were twice as fast, so the number of objections decreased considerably. Life at Facebook was, and continues to be, very good.
<h4>I spoke at some conferences...</h4>
Thus far in 2013, I gave two external talks at conferences, two internal talks at Facebook, and one best man's speech. Public speaking really stresses me out, yet I continue to volunteer to do it. (The best man's speech was the shortest of the five, but probably the most stressful. It's a friendly audience, but expectations are high!) I used to think it was weird when I read that actors like John Malkovich often avoid ever seeing the movies that they are in, but given that I rarely watch recordings of talks I have given, I guess I can understand why. Like listening to your own voice on tape, it always seems worse to you than to anyone else.
<p>
This year, for the first time, I was invited to speak at a conference. The name of the conference was <a href="http://mloc-js.com/">mloc.js</a> (mloc = Million Lines of Code), and the focus was on scaling JavaScript development. Both programming in the large and JavaScript development are topics near and dear to my heart, so this seemed like a great fit for me. I was also honored to be invited to speak anywhere, let alone in a foreign country (the conference was in Budapest), so I accepted.
<p>
Since some nice group of people was paying me to fly to their country and stay in a hotel, I felt that I owed it to them to give a presentation that did not suck. I also felt like they deserved something new, so rather than rehash one of my talks from 2011 promoting Google Closure, I decided to create a new talk from scratch. The two topics that came to mind were <strong>optional typing</strong> and <strong>async paradigms in JavaScript</strong>. Initially, I sketched out one talk about both topics, but I quickly realized that I would only have time for one of the two. Although I feel like someone needs to go in and slap the Node community around until they stop creating APIs based on callbacks instead of Promises (in hopes that we ultimately get an <code>await</code> keyword in JavaScript that takes a Promise), I decided that I could deliver a more thought-provoking talk about optional typing that included references to Legos and OCaml. And so that's what I did:
<p>
<iframe width="560" height="315" src="//www.youtube.com/embed/PCL3dXQZ_Wk" frameborder="0" allowfullscreen></iframe>
<p>
(The title of the talk was <em>Keeping Your Code In Check</em>, which is not very specific to what I actually spoke about. Often, conference organizers want talk names and abstracts early in the process so they can make schedules, create programs, etc. In contrast, conference presenters want to do things at the last minute, so I supplied an amorphous title for my talk, figuring that whatever I ended up speaking about could fit the title. I'd say that it worked.)
<p>
I spent hours and hours working on this talk. I was so exhausted when I got to the end of the presentation that I botched some of the Q&A, but whatever. The <a href="https://docs.google.com/presentation/d/1bO1OlVtMUHkXS7aApQsX1hbaQ6qCJ01lrjTv0nChIK0/edit?usp=sharing">slides are available online</a>, but I always find it hard to get much from a deck alone, and watching the full video takes a long time. In the future, I hope to convert the talk into an essay that is easier (and faster) to consume.
<p>
I tried my best not to make Google Closure the focus of my talk, instead trying to expose people to the tradeoffs of various type systems. Google Closure was mainly in there because (1) I did not have to spend any time researching it, (2) it is one of the few successful examples of optional typing out there, and (3) if I sold you on optional typing during my talk, then Google Closure would be something that you as a JavaScript developer could go home and use. I even gave TypeScript some airtime during my talk (though I omitted Dart, since Google was a sponsor of the conference, so they had their chance to pitch), so I thought that was pretty fair. By comparison, if you look at the <a href="http://mloc-js.com/#schedule">conference schedule</a>, almost everyone else in the lineup was shilling their own technology, which might be the perfect solution for a handful of members of the audience, but was less likely to be of universal interest. That was my thinking, anyway.
<h4>...which included open sourcing the Android build tool that I wrote (Buck)</h4>
The second conference talk I gave was at Facebook's own <a href="https://developers.facebook.com/events/mobiledevcon/newyork/">mobile developer conference in New York City</a>. As I mentioned above, the whole custom Android build system thing worked out pretty well internally, so we deemed it worthy to open source it. This conference seemed like a good vehicle in which to do so, so once we launched Facebook Home on April 4 (which I worked on!), I focused 100% on my conference talk on April 18th.
<p>
The presentation itself, though, was only a small portion of what I needed to do. We had to clean up the code, excise any comments that referred to secret internal stuff, and generate sufficient documentation so that people could actually use the thing. For once, I successfully delegated, and everything got done on time, which was great. The only downside was that I was not able to spend as much time rehearsing my talk as I would have liked, but I think that it came out okay:
<p>
<iframe width="560" height="315" src="//www.youtube.com/embed/CdNw6mRpsDI" frameborder="0" allowfullscreen></iframe>
<p>
Note that the title of the talk was <em>How Facebook builds Facebook for Android</em>, which does not mention anything about Buck or a build system at all. This is because we decided to make the open sourcing a surprise! I asked whether I should do "the big reveal" at the beginning or the end of the talk. I was advised that if I did it at the beginning, then people would stop paying attention to me and start reading up on it on their laptops. If I did it at the end, then I would not leave myself any time to talk about Buck. So clearly, the only reasonable alternative was to "do it live," which is what I ended up doing. If you fast-forward to around 22 minutes in, you can see my "showmanship" (that term definitely deserves to be in quotes) where I get the audience to ask me to flip the switch in GitHub from private to public. It was nerve-wracking, but great fun.
<h4>I updated plovr</h4>
With so much going on professionally, and with my new focus on Android, I have not had much time for <a href="http://plovr.com/">plovr</a>. This makes me sad, as the October 2012 release of plovr was <a href="http://code.google.com/p/plovr/downloads/list">downloaded over 9000 times</a>, so that's a lot of developers whose lives could be improved by an updated release. Last month, I finally <a href="http://code.google.com/p/plovr/downloads/detail?name=plovr-81ed862.jar&can=2&q=#makechanges">put out a new release</a>. Part of the reason this took so long is that, in the past 18 months, Oracle dropped support for Java 6 and Google moved some of the SVN repos for Closure Tools over to Git. Both of these things broke the "turn the crank" workflow I had put in place to facilitate turning out new versions of plovr with updated versions of Closure Tools. I finally created a new workflow, and as part of that work, I moved plovr off of Google Code and <a href="https://github.com/bolinfest/plovr">onto GitHub</a>. I have already accepted a couple of pull requests, so this seems like progress.
<p>
Admittedly, now that I am writing Java rather than JavaScript every day, I don't have as much skin in the game when it comes to moving plovr forward. I am hoping to delegate more of that responsibility to those who are using plovr actively. I have some specific individuals in mind, though unfortunately I have not made the effort to reach out quite yet.
<h4>Chickenfoot</h4>
<a href="http://groups.csail.mit.edu/uid/chickenfoot/">Chickenfoot</a> is the venerable end-user programming tool for customizing and automating the Web that I built as part of my Master's thesis at MIT in 2005. It was built as a Firefox extension, as Firefox gave us a unique way to make invasive modifications to a real web browser, while also providing a distribution channel to share our experimental changes with ordinary people (i.e., non-programmers). This was pretty spectacular.
<p>
The <a href="http://groups.csail.mit.edu/uid/chickenfoot/install.html">last official release of Chickenfoot (1.0.7) by MIT was in 2009</a>, and targeted Firefox 3. As the current release of Firefox is version 23.0.1, clearly Chickenfoot 1.0.7 is a bit behind. After the NSF grant for Chickenfoot at MIT ran out (which is what effectively ended "official" development), I <a href="https://github.com/bolinfest/chickenfoot">moved the source code to GitHub</a> and got some help to release <a href="https://github.com/downloads/bolinfest/chickenfoot/chickenfoot-1.0.8.xpi">Chickenfoot 1.0.8</a>, which was verified to work on Firefox 4.0-7.0. It may very well also work on Firefox 23.0.1, but I have not kept up.
<p>
Historically, we hosted Chickenfoot downloads on MIT's site (instead of Mozilla's official Add-ons site) because we wanted to control our releases. My experience with submitting an extension to the Mozilla Add-ons site has been no better than my experience with submitting an iOS app to Apple's app store, in that a review on the vendor's end is required, and rejections are frustrating and do happen. Since we were not going to host releases on MIT's site anymore, I decided to give the Add-ons store another try. Here are the contents of the rejection letter from January 19, 2012 (emphasis is mine):
<p>
<blockquote class="preserve_space">Your add-on, Chickenfoot 1.0.9, has been reviewed by an editor and did not meet the criteria for being hosted in our gallery.
Reviewer:
Nils Maier
Comments:
Your version was rejected because of the following problems:
1) <strong>This version contains binary, obfuscated or minified code.</strong> We need to review all of your source code in order to approve it. Please send the following items to amo-admin-reviews@mozilla.org:
* A link to your add-on listing.
* Whether you're nominating this version for Full Review or Preliminary Review.
* The source files of your add-on, either as an attachment or a link to a downloadable package.
* For updates, including the source code of the previously approved version or a diff is also helpful.
You can read our policies regarding source code handling here: https://addons.mozilla.org/en-US/developers/docs/policies/reviews#section-binary.
2) <strong>Your add-on uses the 'eval' function to parse JSON strings</strong>. This can be a security problem and we normally don't allow it. Please read https://developer.mozilla.org/en/Using_JSON_in_Firefox for more information about safer alternatives.
3) <strong>Your add-on uses the 'eval' function unnecessarily, which is something we normally don't accept</strong>. There are many reasons *not* to use 'eval', and also simple alternatives to using it. You can read more about it here: https://developer.mozilla.org/en/XUL_School/Appendix_C:_Avoid_using_eval_in_Add-ons
4) <strong>Your add-on creates DOM nodes from HTML strings containing unsanitized data, by assigning to innerHTML or through similar means.</strong> Aside from being inefficient, this is a major security risk. For more information, see https://developer.mozilla.org/en/XUL_School/DOM_Building_and_HTML_Insertion
(All of the above is already marked by the validator as warnings. I stopped at this point; there might be more issues)
Please fix them and submit again. Thank you.
This version of your add-on has been disabled. You may re-request review by addressing the editor's comments and uploading a new version. To learn more about the review process, please visit https://addons.mozilla.org/developers/docs/policies/reviews#selection
If you have any questions or comments on this review, please reply to this email or join #amo-editors on irc.mozilla.org
</blockquote>
<p>
An underlying principle of Chickenfoot is to empower an end-user to be able to write a script, which we would <code>eval()</code>, and that script could modify whatever the user wanted. Clearly, our goals flew in the face of the security model that extension reviewers at Mozilla are trying to maintain. The thought of trying to clean things up and then convince a reviewer that what we were doing was legitimate seemed overwhelming. (Also, the code had been bastardized by graduate students over the years, so the thought of me trying to decipher it after seven years of writing JavaScript in an industry setting was equally overwhelming.)
<p>
(This experience is what engendered my (arguably ill-advised) quip to <a href="http://en.wikipedia.org/wiki/Mike_Shaver">Mike Shaver</a> during my first week at Facebook: "I used to like Firefox." I didn't know that, prior to Facebook, Mike had been the VP of Engineering at Mozilla, but I'm pretty sure that everyone within earshot did. Awk-ward. I think we spent the next hour or two complaining about calendaring standards and timezones. We're friends now.)
<p>
Given the uphill battle of getting Chickenfoot into the Add-ons site coupled with the general need for a full-on rewrite, people sometimes ask: when are we going to port Chickenfoot to Chrome? To be honest, I am not sure whether that's possible. For starters, Chickenfoot leverages <code>Thread.sleep()</code> from XPCOM so that we can do a busy-wait to suspend JavaScript execution while pages are loading, which enables end-user programmers to write linear code. (I think we would have to run a JavaScript interpreter written in JavaScript to get around this, which is doable, albeit a bit crazy.) Further, for security reasons (or maybe for forward/backward-compatibility reasons), the APIs available to browser extensions are historically more restricted in Chrome than they are in Firefox. (Last I checked, Mozilla would not allow you to declare your extension compatible with all future versions of Firefox, so you had to specify a <code>maxVersion</code> and update your extension with every new Firefox release, which drove me insane.) In Firefox, the entire browser is scriptable, so if you can get a reference to any part of the Firefox UI in JavaScript, you can basically do whatever you want with it. This has not been my experience with Chrome. (On the bright side, I have never seen an instance where a Chrome user has lost half of his screen real-estate to browser toolbars, which I have certainly seen in both Firefox and more notably, Internet Explorer.)
<p>
It actually breaks my heart when I get an email from an end-user programmer who asks for a new version of Chickenfoot. Unlike a full-fledged programmer, most of these folks don't have the tools or knowledge in order to build an alternative solution. Some of them continue to run Firefox 2.0 or 3.0 just so the workflow they created around Chickenfoot still works. I would love to have a clean Chickenfoot codebase from which I could pop out an up-to-date extension, a piece of software that leverages everything I have learned about development over the past decade, but I don't have the cycles.
<h4>What's next?</h4>
At Facebook, I am extremely excited about the work that we're doing on Buck, and on mobile in general. We are consistently making Android developers' lives better, and we are <a href="https://github.com/facebook/buck/commits/master">frequently exporting these improvements to the community</a>. I expect to continue working on Buck until we run out of ideas for how we can make developers more efficient, or at least until we have enough support for Buck in place that I feel comfortable focusing on something else. I should probably also mention that we're most definitely hiring at Facebook, so if you're also interested in working on a high-quality, open source build tool, then <a href="mailto:bolinfest@gmail.com">please drop me a line</a>.
Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com12tag:blogger.com,1999:blog-2246259935483101149.post-71246052066384602772013-05-18T17:20:00.000-04:002013-07-22T02:51:25.335-04:00Chromebook Pixel gives me an excuse to fork JSNESFor a long time, I have been <a href="http://blog.bolinfest.com/2009/11/nes-emulators-and-applescript.html">intrigued by NES emulators</a>. I was extremely excited when <a href="http://benfirshman.com/">Ben Firshman</a> released an <a href="http://benfirshman.com/projects/jsnes/">NES emulator in JavaScript (JSNES)</a> over three years ago. At the time, I noted that JSNES ran at almost 60fps in Chrome, but barely trickled along in Firefox. <s>It's pretty shocking to see that that is still the case today: in Firefox 21.0 on Ubuntu, I am seeing <em>at most</em> 2fps while sitting idle on the title screen for Dr. Mario. That's pretty sad.</s> (It turns out I was getting 1fps because I had Firebug enabled. Maybe that's why my perception of Firefox has diminished over time. Someone at Mozilla should look into that...) This is the only browser benchmark I care about these days.
<p>
When <a href="http://blog.chromium.org/2012/07/introducing-getusermedia-and-javascript.html">the Gamepad API was first announced for Chrome</a>, I tried to get my <a href="http://www.retrousb.com/product_info.php?cPath=21&products_id=28">USB NES RetroPort</a> controllers to work, but Chrome did not seem to recognize them. I made a mental note to check back later, assuming the API would eventually be more polished. Fast-forward to this week where I was fortunate enough to attend <a href="https://developers.google.com/events/io/2013/">Google I/O</a> and score a <a href="http://www.amazon.com/gp/product/B009LL9VDG/ref=as_li_qf_sp_asin_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B009LL9VDG&linkCode=as2&tag=bolinfestcom-20">Chromebook Pixel</a>. It seemed like it was time to give my controllers another try.
<p>
Last night, I plugged a RetroPort into the Pixel and visited the <a href="http://www.html5rocks.com/en/tutorials/doodles/gamepad/gamepad-tester/tester.html">Gamepad API test page</a>, and it worked! Obviously the next thing I had to do was wire this up to JSNES, so that was the first thing I did when I woke up this morning. I now have <a href="https://github.com/bolinfest/jsnes">my own fork of the JSNES project</a> where I added support for the RetroPort controllers as well as loading local ROMs from disk. As I admit in my <a href="https://github.com/bolinfest/jsnes/blob/master/README.md">README.md</a>, there are already outstanding <a href="https://github.com/bfirsh/jsnes/pulls">pull requests</a> for these types of things, but I wanted to have the fun of doing it myself (and an excuse to poke around the JSNES code).
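<p>
For the curious, the shape of the Gamepad API integration looks roughly like this (a sketch, not my fork's actual code; the button-to-NES mapping is arbitrary):
<pre class="prettyprint">
// Poll the Gamepad API once per frame and feed button state to the
// emulator; navigator.getGamepads() returns the connected controllers.
function pollGamepad(nes) {
  var pads = navigator.getGamepads ? navigator.getGamepads() : [];
  var pad = pads[0];
  if (pad) {
    var aPressed = pad.buttons[0].pressed;  // Treat button 0 as NES "A".
    // ...translate aPressed (and the other buttons) into the
    // emulator's controller state here...
  }
  window.requestAnimationFrame(function() { pollGamepad(nes); });
}
</pre>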
<p>
Finally, the one outstanding feature I hoped to add was loading ROMs from Dropbox or GDrive using pure JavaScript. Neither product appears to have a simple JavaScript API that will give you access to file data like the <a href="http://www.w3.org/TR/FileAPI/">W3C File API</a> does. Perhaps I'll host my fork of JSNES if I can ever add such a feature...
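<p>
By contrast, loading a local ROM with the W3C File API is straightforward (a sketch only; the element id and the emulator hookup are invented):
<pre class="prettyprint">
// Read the selected ROM file into a binary string for the emulator.
document.getElementById('rom-input').addEventListener('change', function(e) {
  var file = e.target.files[0];
  var reader = new FileReader();
  reader.onload = function() {
    var romData = reader.result;
    // ...hand romData to the emulator here...
  };
  reader.readAsBinaryString(file);
});
</pre>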
<p>
<strong>P.S.</strong> I should admit that one does not need a Pixel to do these types of things. However, having a new piece of hardware and APIs that have been around long enough that you expect them to be stable is certainly a motivating factor. It's nice to have a project that doesn't involve any yak-shaving, such as figuring out how to install a version of Chrome from the Beta channel!
Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com12tag:blogger.com,1999:blog-2246259935483101149.post-19781284771076216422013-01-02T11:02:00.000-05:002013-01-02T11:02:15.394-05:00Generating Google Closure JavaScript from TypeScriptOver a year ago, I created a prototype for <a href="http://bolinfest.com/coffee/" target="_blank">generating Google Closure JavaScript from CoffeeScript</a>. Today, I am releasing <a href="http://bolinfest.com/typescript/" target="_blank">a new, equivalent prototype</a> that uses <a href="http://typescript.codeplex.com/" target="_blank">TypeScript</a> as the source language.
<p>
<strong><shamelessplug>I will be speaking at <a href="http://mloc-js.com/" target="_blank">mloc.js</a> next month in Budapest, Hungary, which is a conference on scaling JavaScript development. There, I will discuss various tools to help keep large codebases in check, so if you are interested in hearing more about this work, come join me in eastern Europe in February!</shamelessplug></strong>
<p>
I have been interested in the challenge of Web programming in the large for quite some time. I believe that <a href="https://developers.google.com/closure/" target="_blank">Google Closure</a> is currently still the best option for large-scale JavaScript/Web development, but that it will ultimately be replaced by something that is less verbose. Although <a href="http://www.dartlang.org/" target="_blank">Dart</a> shows considerable promise, <a href="http://www.dartlang.org/articles/m2-whats-new/">I am still dismayed by the size of the JavaScript that it generates</a>. By comparison, if TypeScript can be directly translated to JavaScript that can be compiled using the <a href="https://developers.google.com/closure/compiler/docs/api-tutorial3" target="_blank">advanced mode of the Closure Compiler</a>, then we can have all the benefits of optimized JavaScript from Closure without the verbosity. What's more, because TypeScript is a superset of JavaScript, I believe that its syntax extensions have a chance of making it into the ECMAScript standard at some point, whereas <a href="http://www.infoworld.com/d/application-development/what-javascripts-inventor-really-thinks-about-google-dart-185045" target="_blank">the chances of Dart being supported natively in all major browsers are pretty low</a>.
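<p>
For a flavor of what the translation looks like (a hypothetical example, not actual output from the prototype), a TypeScript declaration such as <code>function len(s: string): number</code> could map to JavaScript with Closure Compiler type annotations:
<pre class="prettyprint">
/**
 * Hypothetical translated output for the TypeScript declaration
 * `function len(s: string): number` (illustrative only).
 * @param {string} s
 * @return {number}
 */
function len(s) {
  return s.length;
}
</pre>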
<!--
<p>
As discussed in the <em>Learn more</em> section of the <a href="http://bolinfest.com/typescript/">prototype page</a>, I think that this prototype has more potential than my CoffeeScript one because TypeScript's grammar has a built-in affordance for optional type information whereas CoffeeScript does not.
-->
<p>
In terms of hacking on TypeScript itself, getting started with the codebase was a pain because I develop on Linux, and presumably TypeScript's developers do not. Apparently <a href="http://codeplex.codeplex.com/workitem/26133" target="_blank">Microsoft does not care that cloning a CodePlex repository using Git on Linux does not work</a>. Therefore, in order to get the TypeScript source code, I had to switch to a Mac, clone the repo there, and then <a href="https://github.com/bolinfest/typescript/" target="_blank">import it into GitHub</a> so I could clone it from my Linux box.
<p>
Once I got the code on my machine, the next step was figuring out how to build it. TypeScript includes a <code>Makefile</code>, but it contains <em>batch</em> commands rather than <em>Bash</em> commands, so of course those did not work on Linux, either. Modifying the <code>Makefile</code> <a href="https://github.com/bolinfest/typescript/commit/c1c6cca7a6a3c81288cba82b197fb16014e16a11" target="_blank">was a bit of work</a>, and unfortunately will likely be a pain to reconcile with future upstream changes. It would be extremely helpful if Microsoft switched to a cross-platform build tool (or maintained two versions of the <code>Makefile</code> themselves).
<p>
Once I was able to build, I wanted to start hacking and exploring the TypeScript codebase. I suspected that reading the <code>.ts</code> files as plaintext without any syntax highlighting would be painful, so I Googled to see if there were any good alternatives to Visual Studio with TypeScript support that would run on Linux. Fortunately, JetBrains recently released <a href="http://blog.jetbrains.com/webide/2012/11/phpstorm-webstorm-6-0-early-access-program-started/" target="_blank">a beta of PhpStorm and WebStorm 6.0 that includes support for TypeScript</a>, so I decided to give it a try. Given that both TypeScript and support for it in WebStorm are very new, I am quite impressed with how well it works today. Syntax highlighting and ctrl+clicking to definitions work as expected, though PhpStorm reports many false positives in terms of TypeScript errors. I am optimistic that this will get better over time.
<p>
Now that I have all of the setup out of the way, the TypeScript codebase is pretty nice to work with. The source code does not contain much in the way of comments, but the code is well-structured and the class and variable names are well-chosen, so it is not too hard to find your way. I would certainly like to upstream some of my changes and contribute other fixes to TypeScript, though if that Git bug on CodePlex doesn't get fixed, it is unlikely that I will become an active contributor.
<p>
<em>Want to learn more about Closure? Pick up a copy of my book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com44tag:blogger.com,1999:blog-2246259935483101149.post-18450256839013019032012-04-25T17:38:00.001-04:002012-04-25T17:38:32.171-04:00New Essay: Caret Navigation in Web ApplicationsFor those of you who don't follow me on <a href="http://twitter.com/bolinfest">Twitter</a>, yesterday I announced a new essay: <a href="http://bolinfest.com/javascript/caret-navigation.html">Caret Navigation in Web Applications</a>. There I talk about some of the things that I worked hard to get right when implementing the UI for Google Tasks.
<p>
This essay may not be for JavaScript lightweights, but even so, it didn't seem to gain much traction on <a href="http://news.ycombinator.com/item?id=3884353">Hacker News</a> or on <a href="http://www.reddit.com/r/javascript/comments/sq2pi/how_caret_navigation_works_in_google_tasks/">Reddit</a> (which is particularly sad because Reddit readers were much of the reason that I took the time to write it in the first place). I suppose it would have been more popular if I had made it a lot shorter, but I believe it would have been much less useful if I had. That's just my style, I guess: <em>Closure: The Definitive Guide</em> was originally projected to be 250-300 pages and it ended up at 550.
<p>
I also wanted to take this opportunity to make a few more comparisons between Google Tasks and Asana that weren't really appropriate for the essay:
<ul>
<li>It's annoying that Asana does not support real hierarchy. <a href="http://www.quora.com/Asana/Why-does-Asana-have-only-two-levels-of-hierarchy">Asana has responded to this on Quora</a>, but their response is really about making Asana's life easier, not yours. What I think is really interesting is that if they ever decide to reverse their decision, it will really screw with their keyboard shortcuts since they currently heavily leverage <em>Tab</em> as a modifier, but that may not work so well when users expect <em>Tab</em> to indent (and un-indent).
<li>It's pretty subtle, but when you check something off of your list in Google Tasks, there is an animated strikethrough. I did this by creating a timing function that redraws the task with the closing <code></s></code> tag at an increasing position in the text to achieve the effect (see the sketch after this list). This turned out to be tricky to get right because you want the user to be able to see it (independent of task text length) without it feeling slow for long tasks. Since so many people seem to delight in crossing things off their list, this always seemed like a nice touch in Google Tasks. I don't find crossing things off my list in Asana as satisfying.
<li>Maybe I haven't given it enough time yet, but I have a lot of trouble with Asana's equivalent of the details pane. The thing seems to slide over when I don't want it to, or I just want to navigate through my tasks and have the details pane update and it starts bouncing on me...I don't know. Am I alone? In Tasks, it's shift-enter to get in and shift-enter to get out. It's not as sexy since there's no animation, but it's fast, it works, and it doesn't get in my way.
</ul>
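<p>Here is a sketch of that strikethrough animation (this is not the Tasks source, and a real implementation would also need to HTML-escape the task text):
<pre class="prettyprint">// Redraw the task with the </s> at an increasing position. Using a
// fixed duration keeps the effect visible for short tasks without
// feeling slow for long ones.
var animateStrikethrough = function(taskEl, durationMs) {
  var text = taskEl.textContent;
  var start = new Date().getTime();
  var tick = function() {
    var fraction = Math.min((new Date().getTime() - start) / durationMs, 1);
    var index = Math.round(text.length * fraction);
    taskEl.innerHTML =
        '<s>' + text.substring(0, index) + '</s>' + text.substring(index);
    if (fraction < 1) window.setTimeout(tick, 16);
  };
  tick();
};</pre>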
<p>
Finally, some friends thought I should include a section in the essay about "what browser vendors can do to help" since clearly the browser does not have the right APIs to make things like Google Tasks easy to implement. As you may imagine, I was getting pretty tired of writing the essay, so I didn't include it, but this is still an important topic. At a minimum, an API to return the current (<var>x</var>, <var>y</var>) position of the cursor as well as one to place it at an (<var>x</var>, <var>y</var>) position would help, though even that is ambiguous since a cursor is generally a line, not a point. It would be interesting to see how other UI toolkits handle this.
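<p>To make the gap concrete, the closest approximation I know of with today's APIs is a sketch like the following, and its caveats are exactly the problem:
<pre class="prettyprint">// Approximate the (x, y) position of the caret in the current selection.
var getCaretPosition = function() {
  var selection = window.getSelection();
  if (selection.rangeCount == 0) return null;
  var range = selection.getRangeAt(0).cloneRange();
  range.collapse(true /* toStart */);
  // Browsers sometimes report an empty rect for a collapsed range (e.g.,
  // in an empty text node), which is the sort of ambiguity noted above.
  var rect = range.getBoundingClientRect();
  return {x: rect.left, y: rect.top};
};</pre>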
<p>
Oh, and I should mention that if I were to reimplement Google Tasks today and had enough resources, I would not implement it the way that I did. If you look at Google Docs, you will see that your cursor is not a real cursor, but a blinking <div>. That means that Docs is taking responsibility for turning all of your mouse and keyboard input into formatted text. It eliminates all of the funny stuff with <code>contentEditable</code> by basically rebuilding...well, everything related to text editing in the browser. If I had libraries like that to work with (ones that already support pushing changes down to multiple editors in real time), then I would leverage those instead. Of course, if you don't have code like that available, then that is a pretty serious investment to make. Engineering is all about picking the right tradeoffs!
<p>
<em>Want to learn more about the technologies used to build Google Tasks? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>.</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com7tag:blogger.com,1999:blog-2246259935483101149.post-85353721307015370302011-10-28T17:33:00.005-04:002011-10-28T18:18:15.165-04:00I want a magical operator to assuage my async woes (and a pony)Lately, I have spent a lot of time thinking about how I could reduce the tedium of async programming in JavaScript. For example, consider a typical implementation of using an <code>XMLHttpRequest</code> to do a <code>GET</code> that returns a Deferred (this example uses <a href="http://api.jquery.com/category/deferred-object/">jQuery</a>'s implementation of Deferred, but there are <a href="http://mochi.github.com/mochikit/doc/html/MochiKit/Async.html">many</a> <a href="http://code.google.com/p/closure-library/source/browse/trunk/third_party/closure/goog/mochikit/async/deferred.js?r=195">other</a> reasonable implementations, and there is a great need to settle on a standard API, but that is a subject for another post):<pre class="prettyprint">/** @return {Deferred} */
var simpleGet = function(url) {
  var deferred = new $.Deferred();
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
      if (xhr.status == 200) {
        deferred.resolve(xhr.responseText);
      } else {
        deferred.reject(xhr.status);
      }
    }
  };
  xhr.open('GET', url, true /* async */);
  xhr.send(null);
  return deferred;
};</pre>What I want is a magical <code style="color:green">~</code> operator that requires (and understands) an object that implements a well-defined Deferred contract so I can write my code in a linear fashion:<pre class="prettyprint">/** @return {Deferred} */
var getTitle = function(url) {
  if (url.substring(0, 7) != 'http://') url = 'http://' + url;
  var html = ~simpleGet(url);
  var title = html.match(/<title>(.*)<\/title>/)[1];
  return title;
};
/** Completes asynchronously, but does not return a value. */
var logTitle = function(url) {
  try {
    var title = ~getTitle(url);
    console.log(title);
  } catch (e) {
    console.log('Could not extract title from ' + url);
  }
};</pre>Unfortunately, to get this type of behavior today, I have to write something like the following (and even then, I am not sure whether the error handling is quite right):<pre class="prettyprint">/** @return {Deferred} */
var getTitle = function(url) {
  if (url.substring(0, 7) != 'http://') url = 'http://' + url;
  var deferred = new $.Deferred();
  simpleGet(url).then(function(html) {
    var title = html.match(/<title>(.*)<\/title>/)[1];
    deferred.resolve(title);
  }, function(error) {
    deferred.reject(error);
  });
  return deferred;
};
/** Completes asynchronously, but does not return a value. */
var logTitle = function(url) {
  var deferred = getTitle(url);
  deferred.then(function(title) {
    console.log(title);
  }, function(error) {
    console.log('Could not extract title from ' + url);
  });
};</pre>I am curious how difficult it would be to programmatically translate the first into the second. I spent some time playing with <a href="https://developer.mozilla.org/en/New_in_JavaScript_1.7">generators in Firefox</a>, but I could not seem to figure out how to emulate my desired behavior. I also spent some time looking at the <a href="http://wiki.ecmascript.org/doku.php?id=strawman:async_functions&s=async">ECMAScript</a> <a href="http://wiki.ecmascript.org/doku.php?id=strawman:deferred_functions">wiki</a>, but it is unclear whether they are talking about exactly the same thing.<br />
<br />
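For what it's worth, here is a sketch of the trampoline I have in mind. Be warned that it assumes generator semantics that Firefox does not have today (a <code>next()</code> that accepts a value and returns a <code>{value, done}</code> record), and the <code>runAsync</code> name is my own invention:
<pre class="prettyprint">// Hypothetical runAsync(): drive a generator, treating every yielded
// value as a Deferred and resuming the generator when it settles.
var runAsync = function(generatorFn) {
  var deferred = new $.Deferred();
  var generator = generatorFn();
  var step = function(method, value) {
    var result;
    try {
      result = generator[method](value);
    } catch (e) {
      deferred.reject(e);
      return;
    }
    if (result.done) {
      deferred.resolve(result.value);
    } else {
      result.value.then(
          function(v) { step('next', v); },
          function(e) { step('throw', e); });
    }
  };
  step('next', undefined);
  return deferred;
};

// getTitle() from above, with yield standing in for the magical ~:
var getTitle = function(url) {
  return runAsync(function*() {
    if (url.substring(0, 7) != 'http://') url = 'http://' + url;
    var html = yield simpleGet(url);
    return html.match(/<title>(.*)<\/title>/)[1];
  });
};</pre>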
In terms of modern alternatives, it appears that <a href="http://blogs.msdn.com/b/csharpfaq/archive/2010/10/28/async.aspx">C#'s await and async keywords</a> are the closest thing to what I want right now. Unfortunately, I want to end up with succinct JavaScript that runs in the browser, so I'm hoping that either <a href="http://jashkenas.github.com/coffee-script/">CoffeeScript</a> or <a href="http://www.dartlang.org/">Dart</a> will solve this problem, unless the ECMAScript committee gets to it first!<br />
<br />
Please feel free to add pointers to related resources in the comments. There is a lot out there to read these days (the Dart mailing list alone is fairly overwhelming), so there's a good chance that there is something important that I have missed.<br />
<br />
<b>Update (Fri Oct 28, 6:15pm):</b> I might be able to achieve what I want using <a href="http://code.google.com/p/traceur-compiler/wiki/LanguageFeatures#Deferred_Functions">deferred functions</a> in <a href="http://code.google.com/p/traceur-compiler/">Traceur</a>. Apparently I should have been looking at the <a href="http://wiki.ecmascript.org/doku.php?id=strawman:deferred_functions">deferred functions</a> strawman proposal more closely: I skimmed it and assumed it was only about defining a Deferred API.<br />
<br />
<em>Want to learn about a suite of tools to help manage a large JavaScript codebase? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com70tag:blogger.com,1999:blog-2246259935483101149.post-85433481084730162502011-08-02T13:44:00.000-04:002011-08-02T13:44:21.341-04:00An Examination of goog.base()A few weeks ago, I started working on <a href="https://github.com/bolinfest/coffee-script">adding an option to CoffeeScript to spit out Closure-Compiler-friendly JavaScript</a>. In the process, I discovered that calls to a superclass constructor in CoffeeScript look slightly different than they do in the Closure Library. For example, if you have a class <code>Foo</code> and a subclass <code>Bar</code>, then in CoffeeScript, the call [in the generated JavaScript] to invoke <code>Foo</code>'s constructor from <code>Bar</code>'s looks like:<br />
<pre>Bar.__super__.constructor.call(this, a, b);</pre>whereas in the Closure Library, the canonical thing is to do the following, identifying the superclass function directly:<br />
<pre>Foo.call(this, a, b);</pre>The two are functionally equivalent, though CoffeeScript's turns out to be slightly simpler for the developer because it does not require the author to know the name of the superclass when writing the line of code. Because CoffeeScript is generating the JavaScript anyway, this localization of information makes the translation from CoffeeScript to JavaScript easier to implement.<br />
<br />
The only minor drawback to using the CoffeeScript form when using Closure (though note that you would have to use <code>superClass_</code> instead of <code>__super__</code>) is that the CoffeeScript call is more bytes of code. Unfortunately, the Closure Compiler does not know that <code>Bar.superClass_.constructor</code> is equivalent to <code>Foo</code>, so it does not rewrite it as such, though such logic could be added to the Compiler.<br />
<br />
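To make the comparison concrete, here is a minimal sketch of the three equivalent super-call forms side by side (<code>Foo</code> and <code>Bar</code> are placeholder names):
<pre class="prettyprint">/** @constructor */
var Foo = function(a, b) {
  this.sum = a + b;
};

/**
 * @constructor
 * @extends {Foo}
 */
var Bar = function(a, b) {
  // Canonical Closure form: identify the superclass directly.
  Foo.call(this, a, b);
  // CoffeeScript-style form: route through the subclass (more bytes, but
  // no need to know the superclass's name at the call site).
  // Bar.superClass_.constructor.call(this, a, b);
  // Or let goog.base() figure it out:
  // goog.base(this, a, b);
};
goog.inherits(Bar, Foo);</pre>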
This piqued my curiosity about how <code>goog.base()</code> is handled by the Compiler, so I ended up taking a much deeper look at <code>goog.base()</code> than I ever had before. I got so caught up in it that I ended up composing a new essay on what I learned: <a href="http://bolinfest.com/essays/googbase.html">"An Examination of goog.base()."</a><br />
<br />
The upshot of all this is that in my CoffeeScript-to-Closure translation code, I am not going to translate any of CoffeeScript's <code>super()</code> calls into <code>goog.base()</code> calls because avoiding <code>goog.base()</code> eliminates a couple of issues. I will still use <code>goog.base()</code> when writing Closure code by hand, but if Closure code is being autogenerated anyway, then using <code>goog.base()</code> is not as compelling.<br />
<br />
Finally, if you're wondering why I started this project a few weeks ago and have not made any progress on the code since then, it is because I got married and went on a honeymoon, so at least my wife and I would consider that a pretty good excuse!<br />
<br />
<em>Want to learn more about Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com6tag:blogger.com,1999:blog-2246259935483101149.post-49604937797540106992011-07-01T16:10:00.000-04:002011-07-01T16:10:08.006-04:00Writing useful JavaScript applications in less than half the size of jQueryNot too long ago, I tried to <a href="http://blog.bolinfest.com/2011/04/jquerycom-uses-only-34-of-jquery.html">bring attention to how little of the jQuery library many developers actually use</a> and argue that frontend developers should consider what sort of improvements their users would see if they could compile their code with the <a href="http://code.google.com/closure/compiler/docs/api-tutorial3.html">Advanced mode of the Closure Compiler</a>. Today I would like to further that argument by taking a look at <a href="https://chrome.google.com/webstore/detail/ifacldneidndihdbgfkglegmjohkplme">TargetAlert</a>, my browser extension that I <a href="http://blog.bolinfest.com/2011/06/triumphant-return-of-targetalert.html">re-released this week for Chrome</a>.<br />
<br />
TargetAlert is built using <a href="http://code.google.com/closure/">Closure</a> and <a href="http://www.plovr.com/">plovr</a> using the <a href="http://code.google.com/p/closure-chrome-extension-template/">template I created for developing a Chrome extension</a>. The packaged version of the extension includes three JavaScript files:<br />
<style>
.targetalert-js-stats {
border-collapse: collapse;
}
.targetalert-js-stats td, .targetalert-js-stats th {
border: 1px solid #CCC;
}
</style><br />
<table border="1" class="targetalert-js-stats" cellpadding="3"> <tr>
<th>Name</th>
<th>Size (bytes)</th>
<th>Description</th>
</tr>
<tr>
<td><code>targetalert.js</code></td>
<td style="text-align: right">19475</td>
<td>content script that runs on every page</td>
</tr>
<tr>
<td><code>options.js</code></td>
<td style="text-align: right">19569</td>
<td>logic for the TargetAlert options page</td>
</tr>
<tr>
<td><code>targetalert.js</code></td>
<td style="text-align: right">3590</td>
<td><a href="http://code.google.com/chrome/extensions/background_pages.html">background page</a> that channels information from the options to the content script</td>
</tr>
<tr>
<td>Total</td>
<td style="text-align: right">42634</td>
<td> </td>
</tr>
</table><br />
By comparison, <a href="https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js">the minified version of jQuery 1.6</a> is 91342 bytes, which is <em>more than twice the size of the code for TargetAlert</em>. (The gzipped sizes are 14488 vs. 31953, so the relative sizes are about the same, even when gzipped.)<br />
<br />
And to put things in perspective, here is the set of <code>goog.require()</code> statements that appear in TargetAlert code, which reflects the extent of its dependencies:<br />
<pre>goog.require('goog.array');
goog.require('goog.dispose');
goog.require('goog.dom');
goog.require('goog.dom.NodeIterator');
goog.require('goog.dom.NodeType');
goog.require('goog.dom.TagName');
goog.require('goog.events');
goog.require('goog.events.EventType');
goog.require('goog.string');
goog.require('goog.ui.Component');
goog.require('goog.Uri');
goog.require('soy');
</pre>I include this to demonstrate that there was no effort to re-implement parts of the Closure Library in order to save bytes. On the contrary, one of the primary reasons to use Closure is that you can write code in a natural, more readable way (which may be slightly more verbose), and make the Compiler responsible for minification. Although competitions like <a href="http://js1k.com/">JS1K</a> are fun and I'm amazed to see how small JS can get when it is hand-optimized, even the first winner of JS1K, <a href="http://www.amazon.com/gp/product/1593272820/ref=as_li_qf_sp_asin_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217145&creative=399369&creativeASIN=1593272820">Marijn Haverbeke</a>, admits, <a href="http://marijnhaverbeke.nl/js1k/">"In terms of productivity, this is an awful way of coding."</a><br />
<br />
When deploying a packaged browser extension, there are no caching benefits to consider: if your extension includes a copy of jQuery, then it adds an extra 30K to the user's download, even when gzipped. To avoid this, your extension could reference <a href="https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js">https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js</a> (or equivalent) from its code, but then it may not work when offline. Bear in mind that in some parts of the world (including the US! think about data plans for tablets), users have quotas for how much data they download, so you're helping them save a little money if you can package your resources more efficiently.<br />
<br />
Further, if your browser extension has a content script that runs on every page, keeping your JS small reduces the amount of code that will be executed on every page load, minimizing the impact of your extension on the user's browsing experience. As users may have many extensions installed, if everyone starts including an extra 30K of JS, then this additional tax can really start to add up! Maybe it's time you gave Closure a good look, if you haven't already.<br />
<br />
<em>Want to learn more about Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com28tag:blogger.com,1999:blog-2246259935483101149.post-53330823088538017982011-06-27T22:52:00.000-04:002011-06-27T22:52:21.312-04:00The Triumphant Return of TargetAlert!About seven years ago, my <a href="http://people.csail.mit.edu/rcm/">adviser</a> and I were sitting in his office Googling things as part of research for my thesis. I can't remember what we were looking for, but just after we clicked on a promising search result, the Adobe splash screen popped up. As if on cue, we both let out a groan in unison as we waited for the PDF plugin to load. In that instant, it struck me that I could build a small Firefox extension to make browsing the Web just a little bit better.<br />
<br />
<a href="http://bolinfest.com/targetalert/version.html">Shortly thereafter</a>, I created TargetAlert: a browser extension that would warn you when you were about to click on a PDF. It used the simple heuristic of checking whether the link ended in <b>pdf</b>, and if so, it inserted a PDF icon at the end of the link <a href="http://bolinfest.com/targetalert/">as shown on the original TargetAlert home page</a>.<br />
<br />
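In spirit, the heuristic was no more complicated than this sketch (the icon path is a placeholder, not the extension's actual resource):
<pre class="prettyprint">// Tag every link that appears to point at a PDF.
var anchors = document.getElementsByTagName('a');
for (var i = 0; i < anchors.length; i++) {
  var anchor = anchors[i];
  if (/\.pdf$/i.test(anchor.pathname)) {
    var icon = document.createElement('img');
    icon.src = 'pdf-icon.png';  // placeholder path for the alert icon
    anchor.appendChild(icon);
  }
}</pre>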
And that was it! My problem was solved. Now I was able to avoid inadvertently starting up Adobe Reader as I browsed the Web.<br />
<br />
But then I realized that there were other things on the Web that were irritating, too! Specifically, links that opened in new tabs without warning or those that started up Microsoft Office. Within a week, I added alerts for those types of links, as well.<br />
<br />
After adding those features, I should have been content with TargetAlert as it was and put it aside to focus on <a href="http://groups.csail.mit.edu/uid/chickenfoot/index.php">my thesis</a>, but then something incredible happened: <a href="http://tech.slashdot.org/story/04/09/06/1228202/Exploring-Firefox-Extensions">I was Slashdotted</a>! Suddenly, I had a lot more traffic to my site and many more users of TargetAlert, and I did not want to disappoint them, so I added a few more features and updated the web site. Bug reports came in (which I recorded), but it was my last year at MIT, and I was busy interviewing and <a href="http://bolinfest.com/6.170/">TAing</a> on top of my coursework and research, so updates to TargetAlert were sporadic after that. It wasn't until the summer between graduation and starting at Google that I had time to dig into TargetAlert again.<br />
<br />
The primary reason that TargetAlert development slowed, though, is that <em>Firefox extension development should have been fun, but it wasn't</em>. At the time, every time you made a change to your extension, you had to restart Firefox to pick up the change. As you can imagine, that made for a slow edit-reload-test cycle, inhibiting progress. Also, instead of using simple web technologies like HTML and JSON, Firefox encouraged the use of more obscure things, such as XUL and RDF. The bulk of my energy was spent on getting information into and out of TargetAlert's preferences dialog (because I actually tried to use XUL and RDF, as recommended by Mozilla), whereas the fun part of the extension was taking the user's preferences and applying them to the page.<br />
<br />
The #1 requested feature for TargetAlert was for users to be able to define their own alerts (as it was, users could only enable or disable the alerts that were built into TargetAlert). Conceptually, this was not a difficult problem, but realizing the solution in XUL and RDF was an incredible pain. As TargetAlert didn't generate any revenue and I had other personal projects (and <a href="http://googleblog.blogspot.com/2006/04/its-about-time.html">work projects</a>!) that were more interesting to me, I never got around to satisfying this feature request.<br />
<br />
Fast-forward to 2011 when I finally <a href="http://twitter.com/#!/bolinfest/status/81162738306519040">decommissioned a VPS</a> that I had been paying for since 2003. Even though I had rerouted all of its traffic to a new machine years ago and it was costing me money to keep it around, I put off taking it down because I knew that I needed to block out some time to get all of the important data off of it first, which included the original CVS repository for TargetAlert.<br />
<br />
As part of the data migration, I converted all of my CVS repositories to SVN and then to Hg, preserving all of the version history (it should have been possible to convert from CVS to Hg directly, but I couldn't get <code>hg convert</code> to work with CVS). Once I had all of my code from MIT in a modern version control system, I started poking around to see which projects would still build and run. It turns out that I have been a stickler for creating <code>build.xml</code> files for personal projects for quite some time, so I was able to compile more code than I would have expected!<br />
<br />
But then I took a look at TargetAlert. The JavaScript that I wrote in 2004 and 2005 looks gross compared to <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">the way I write JavaScript now</a>. It's not even that it was totally disorganized -- it's just that I had been trying to figure out what the best practices were for Firefox/JavaScript development at the time, and they just didn't exist yet.<br />
<br />
Further, TargetAlert worked on pre-Firefox 1.0 releases through Firefox 2.0, so the code is full of hacks to make it work on those old versions of the browser that are now irrelevant. Oh, and what about XUL? Well, my go-to resource for XUL back in the day was <a href="http://www.xulplanet.com/">xulplanet.com</a>, though the site owners have decided to shut it down, which made making sense of that old code even more discouraging. Once again, digging into Firefox extension development to get TargetAlert to work on Firefox 4.0 did not appear to be much fun.<br />
<br />
Recently, I have been much more interested in building <a href="http://code.google.com/chrome/extensions/docs.html">Chrome apps and extensions</a> (Chrome is my primary browser, and unlike most people, I sincerely enjoy using a <a href="http://en.wikipedia.org/wiki/Google_Chrome_OS#Cr-48_prototype_hardware">Cr-48</a>), so I decided to port TargetAlert to Chrome. This turned out to be a fun project, especially because it forced me to touch a <a href="http://code.google.com/chrome/extensions/content_scripts.html">number</a> <a href="http://code.google.com/chrome/extensions/background_pages.html">of</a> <a href="http://code.google.com/chrome/extensions/messaging.html">features</a> of the Chrome API, so I ended up reading almost all of the documentation to get a complete view of what the API has to offer (hooray learning!).<br />
<br />
Compared to Firefox, the API for Chrome extension development seems much better designed and documented. Though to be fair, I don't believe that Chrome's API would be this good if it weren't able to leverage so many of the lessons learned from years of Firefox extension development. For example, <a href="https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/">Greasemonkey</a> saw considerable success as a Firefox extension, which made it obvious that Chrome should make <a href="http://code.google.com/chrome/extensions/content_scripts.html">content scripts</a> an explicit part of its API. (It doesn't hurt that the creator of Greasemonkey, <a href="http://www.aaronboodman.com/">Aaron Boodman</a>, works on Chrome.) Also, where Firefox uses a <a href="https://developer.mozilla.org/en/Chrome_Registration">wacky, custom manifest file format</a> for one metadata file and an <a href="https://developer.mozilla.org/en/install_manifests">ugly ass RDF format</a> for another metadata file, Chrome uses <a href="http://code.google.com/chrome/extensions/manifest.html">a single JSON file</a>, which is a format that all web developers understand. (Though admittedly, having recently spent a bit of time with <code>manifest.json</code> files for Chrome, I feel that the need for my <a href="http://bolinfest.com/essays/json.html">suggested improvements to JSON</a> is even more compelling.)<br />
<br />
As TargetAlert <a href="https://chrome.google.com/webstore/detail/nofjbmcclkndhmnegkhkbakjgakchepg">was not the first Chrome extension I had developed</a>, I already had some idea of how I would structure my new extension. I knew that I wanted to use both <a href="http://code.google.com/closure/">Closure</a> and <a href="http://plovr.com/">plovr</a> for development, which meant that there would be a quick build step so that I could benefit from the static checking of the Closure Compiler. Although changes to Chrome extensions do not require a restart to pick up any changes, they do often require navigating to <code>chrome://extensions</code> and clicking the <b>Reload</b> button for your extension. I decided that I wanted to eliminate that step, so I created a <a href="http://code.google.com/p/closure-chrome-extension-template/">template for a Chrome extension that uses plovr</a> in order to reduce the length of the edit-reload-test cycle. This enabled me to make fast progress and finally made extension development fun again! (The <a href="http://code.google.com/p/closure-chrome-extension-template/source/browse/README.txt">README</a> file for the template project has the details on how to use it to get up and running quickly.)<br />
<br />
I used the original code for TargetAlert as a guide (it had some workarounds for web page quirks that I wanted to make sure made it to the new version), and within a day, I had a <a href="https://chrome.google.com/webstore/detail/ifacldneidndihdbgfkglegmjohkplme">new version of TargetAlert for Chrome</a>! It had the majority of the features of the original TargetAlert (as well as some bug fixes), and I felt like I could finally check "resurrect TargetAlert" off of my list.<br />
<br />
Except I couldn't.<br />
<br />
A week after the release of my Chrome extension, I only had eight users according to my Chrome developer dashboard. Back in the day, TargetAlert had tens of thousands of users! This made me sad, so I decided that it was finally time to make the Chrome version of TargetAlert <em>better</em> than the original Firefox version: I was finally going to support user-defined alerts! Once I actually sat myself down to do the work, it was not very difficult at all. Because Chrome extensions have explicit support for an options page in HTML/JS/CSS that has access to <code>localStorage</code>, building a UI that could read and write preferences was a problem that I have solved many times before. Further, being able to inspect and edit the <code>localStorage</code> object from the JavaScript console in Chrome was much more pleasant than mucking with user preferences in <code>about:config</code> in Firefox ever was.<br />
<br />
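The pattern is simple enough to sketch in a few lines (the <code>alerts</code> key is a hypothetical name, not necessarily the schema TargetAlert actually uses):
<pre class="prettyprint">// Persist the user-defined alerts from the options page.
var saveAlerts = function(alerts) {
  localStorage.setItem('alerts', JSON.stringify(alerts));
};

// Read them back, falling back to an empty list on first run.
var loadAlerts = function() {
  var json = localStorage.getItem('alerts');
  return json ? JSON.parse(json) : [];
};</pre>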
So after years of feature requests, Target Alert 0.6 for Chrome is my gift to you. Please <a href="https://chrome.google.com/webstore/detail/ifacldneidndihdbgfkglegmjohkplme">install it</a> and try it out! With the exception of the lack of translations in the Chrome version of TargetAlert (the Firefox version <a href="http://bolinfest.com/targetalert/translation.html">had a dozen</a>), the Chrome version is a significant improvement over the Firefox one: it's faster, supports user-defined alerts, and with the seven years of Web development experience that I've gained since the original, I fixed a number of bugs, too.<br />
<br />
<em>Want to learn more about web development and Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com2tag:blogger.com,1999:blog-2246259935483101149.post-63032367806075006352011-06-09T14:00:00.186-04:002011-06-09T15:36:39.876-04:00It takes a village to build a Linux desktop<b>tl;dr</b> Instead of trying to build your own Ubuntu PC, check out a site like <a href="http://www.system76.com/">system76.com</a> instead.<br />
<p><em>In many ways, this post is more for me than for you—I want to make sure I re-read this the next time I am choosing a desktop computer.</em><br />
<p>My Windows XP desktop from March 2007 finally died, so it was time for me to put together a new desktop for development. Because <a href="http://blog.bolinfest.com/2010/12/i-got-new-computer-for-christmas-sorta.html">Ubuntu on my laptop</a> had been working out so well, I decided that I would make my new machine an Ubuntu box, too. Historically, it was convenient to have a native Windows machine to test IE, Firefox, Chrome, Safari, and Opera, but Cygwin is a far cry from GNOME Terminal, so using Windows as my primary desktop environment had not been working out so well for me.<br />
<p>As a developer, I realize that my computer needs are different from an ordinary person's, but I didn't expect it would be so difficult to buy the type of computer that I wanted on the cheap (<$1000). Specifically, I was looking for:<br />
<ul><li>At least 8GB of RAM.<br />
<li>A 128GB solid state drive. (I would have been happy with 64GB because this machine is for development, not storing media, but <a href="http://www.codinghorror.com/blog/2010/09/revisiting-solid-state-hard-drives.html">Jeff Atwood</a> convinced me to go for 128GB, anyway.)<br />
<li>A video card that can drive two 24" vertical monitors (I still have the two that I used with my XP machine). Ideally, the card would also be able to power a third 24" if I got one at some point.<br />
<li>A decent processor and motherboard.<br />
</ul>I also wanted to avoid paying for: <ul><li>A Windows license.<br />
<li>A CD/DVD-ROM drive.<br />
<li>Frivolous things.<br />
</ul>I did not think that I would need a CD-ROM drive, as I planned to install Ubuntu from a USB flash drive. <p>I expected to be able to go to Dell or HP's web site and customize something without much difficulty. Was I ever wrong. At first, I thought it was going to be easy, as the first result for <code>[Dell Ubuntu]</code> <a href="http://www.dell.com/content/topics/segtopic.aspx/linux_3x?c=us&l=en&cs=19">looked very promising</a>: it showed a tower starting at $650 with Ubuntu preinstalled. I started to customize it: upgrading from 4GB to 8GB of RAM increased the price by $120, which was reasonable (though <a href="http://www.amazon.com/gp/product/B003ZDJ42O/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B003ZDJ42O">not quite as good as Amazon</a>). However, I could not find an option to upgrade to an SSD, so buying my own off of Newegg would cost me $240. Finally, the only options Dell offered for video cards were ATI, and I have had some horrible experiences trying to get dual monitors to work with Ubuntu and ATI cards in the past (NVIDIA seems to be better about providing good Linux drivers). At this point, I was over $1000, and was not so sure about the video card, so I started asking some friends for their input. <p>Unfortunately, I have smart, capable friends who can build their own machines from parts, and they were able to convince me that I could, too. You see, in general, I <em>hate</em> dealing with hardware. For me, hardware is simply an inevitable requirement for software. When software goes wrong, I have some chance of debugging it and can attack the problem right away. By comparison, when hardware goes wrong, I am less capable, and may have to wait until a new part from Amazon comes in before I can continue debugging the problem, which sucks. <p>At the same time, I realized that I should probably get past my aversion to dealing with hardware, so I started searching for blog posts about people who had built their own Ubuntu boxes. I found <a href="http://www.saltycrane.com/blog/2011/02/decided-build-linux-desktop-pc-myself/">one post</a> by a guy who built his PC for $388.95, which was far less than the Dell that I was looking at! Further, he itemized the parts that he bought, so at least I knew that if I followed his steps, I would end up with something that worked with Ubuntu (ending up with hardware that was not supported by Ubuntu was one of my biggest fears during this project). I cross-checked this list with a friend who had recently put together a Linux machine with an <a href="http://www.amazon.com/gp/product/B002A6G3V2/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B002A6G3V2">Intel i7</a> chip, and he was really happy with it, so I ended up buying it and the <a href="http://www.amazon.com/gp/product/B001ISJONM/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B001ISJONM">DX58SO motherboard</a> that was recommended for use with the i7. This made things a bit pricier than they were in the blog post I originally looked at:<style>
.parts {
border-collapse: collapse;
}
.parts td, .parts th {
vertical-align: top;
border: 1px solid #666;
padding: 2px;
}
</style> <table class="parts"><tr> <td>Motherboard</td> <td><a href="http://www.amazon.com/gp/product/B001ISJONM/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B001ISJONM">Intel DX58SO Extreme Series X58 ATX Triple-channel DDR3 16GB SLI</a><br />
</td> <td>$269.99</td> </tr>
<tr> <td>CPU</td> <td><a href="http://www.amazon.com/gp/product/B002A6G3V2/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B002A6G3V2">Intel Core i7 950 3.06GHz 8M L3 Cache LGA1366 Desktop Processor</a><br />
</td> <td>$234.99</td> </tr>
<tr> <td>RAM</td> <td><a href="http://www.amazon.com/gp/product/B003ZDJ42O/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B003ZDJ42O">Corsair XMS3 4 GB 1333MHz PC3-10666 240-pin DDR3 Memory Kit for Intel Core i3 i5 i7 and AMD CMX4GX3M1A1333C9</a><br />
</td> <td><nobr>(2x$39.99) $79.98</nobr><br />
</td> </tr>
<tr> <td>Case</td> <td><a href="http://www.amazon.com/gp/product/B001TUYTZ2/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B001TUYTZ2">Cooler Master Elite 360 RC-360-KKN1-GP ATX Mid Tower/Desktop Case (Black)</a><br />
</td> <td>$39.99<br />
</td> </tr>
<tr> <td>Hard Drive</td> <td><a href="http://www.amazon.com/gp/product/B0039SM0AS/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B0039SM0AS">Crucial Technology 128 GB Crucial RealSSD C300 Series Solid State Drive CTFDDAC128MAG-1G1</a><br />
</td> <td>$237.49<br />
</td> </tr>
<tr> <td>Power Supply</td> <td><a href="http://www.amazon.com/gp/product/B0029F21LK/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B0029F21LK">Corsair CMPSU-750HX 750-Watt HX Professional Series 80 Plus Certified Power Supply compatible with Core i7 and Core i5</a><br />
</td> <td>$109.99<br />
</td> </tr>
<tr> <td></td> <td>Sales Tax</td> <td>$55.47</td> </tr>
<tr> <td></td> <th>Total</th> <td>$1027.90</td> </tr>
</table><p>At this point, I should have acknowledged that what I had put together (on paper) was now in the price range of what I was originally looking at on Dell's web site. Unfortunately, I had mentally committed to being a badass and building a machine at this point, so I forged ahead with my purchase. <p>I also should have acknowledged that this list of parts did not include a video card... <p>In a few days, everything had arrived and I started putting everything together as best as I could figure out. I tried following the assembly instructions verbatim from the manuals, but that proved to be a huge mistake, as the suggested assembly order was not appropriate for my parts. For example, the case instructions recommended that I install the power supply first, then the motherboard, though that ended up making the SATA connectors inaccessible, so I had to remove the motherboard, then the power supply, plug in the SATA cables, and then put everything back together again. (This is one of many examples of exercises like this that I went through.) <p>When I was close to having something that I thought would boot, I finally accepted the fact that I had failed to order a video card, so I tried using the one from my XP machine in hopes that it would allow me to kick off the Ubuntu installation process, and then I could walk to Best Buy to purchase a new video card. In the months leading up to the death of my XP machine, I had a lot of problems with my monitors, so it should have been no surprise that my installation screen looked like this: <p><img src="http://bolinfest.com/images/badvideocard.jpg" width="600"> <p>Regardless, this allowed me to run the memory test (which runs forever, by the way—I let this run for hours before I decided to investigate why it never stopped) while I went off to Best Buy. Because I had already convinced myself that an NVIDIA card would work better with Ubuntu than an ATI, I went ahead and bought this badass looking video card: the <a href="http://www.amazon.com/gp/product/B004R9OVE6/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399701&creativeASIN=B004R9OVE6">EVGA GeForce GTX 550 Ti</a>. It was <em>not</em> cheap ($250 at Best Buy, though it would have been $125 on Amazon), but I was a man on a mission, so nothing could stop me. <p>Once I got home and dropped the card in, I had a problem that I never would have anticipated: <em>the video card did not fit in my case</em>. Specifically, the card fit in my case, but it was so close to the power supply that there was not enough room to plug in the separate power connector that the video card needed. At that point, I was both desperate and determined, so I took out the power supply and tried to wrench off the casing to create a big enough hole so there would be enough room behind the video card to plug it in. As you can see, I did succeed in removing quite a bit of metal from the power supply (and most definitely voided the warranty): <p><img src="http://bolinfest.com/images/powersupplymetal.jpg" width="600"> <p>Despite my handiwork with a pair of pliers, I was unable to remove enough metal from the power supply to create enough space to power the video card, so it would have to go back to Best Buy. I decided to search for <code>[ubuntu 11.04 best video card]</code> to find something that I could overnight from Amazon. 
I followed the links from <a href="http://blog.sudobits.com/2011/05/29/best-graphics-card-for-ubuntu-11-04-10-10/">this blog post</a>, and decided to go with the <a href="http://www.amazon.com/gp/product/B003DTMLZW/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B003DTMLZW">ZOTAC nVidia GeForce 9800GT</a>, which was $102.32 after tax and overnight shipping. One of the main selling points for me was the following comment in one of the Amazon reviews: "Another advantage of this card is that it DOES NOT require your power supply to have a video card power connector." Although I was originally hoping to get a card with two DVI inputs (instead of one DVI and one VGA), I really just wanted something that would work at that point. <p>While I was waiting for the new video card to arrive, I tried installing Linux on my SSD with my half-assed video card. Although it seemed like it was going to install off of the USB flash drive on Day 1, my PC did not seem to want to accept it on Day 2. I spent some time Googling for "@ogt error" that morning (because that is what I saw on my screen), until I realized that it was actually saying "boot error" and my video card was just garbling the characters. I rewrote the USB drive with all sorts of different ISOs, and I started to wonder whether buying the cheapest flash drive at Best Buy (it was $8 for 4GB!) was a mistake. I then tried the USB drive on another Windows machine I had lying around, and it took, at which point I was really stumped. Again, I asked some friends what they thought, and they recommended installing from a CD, as that was much more reliable. <p>As you may recall, buying a CD-ROM drive was something I had avoided, so what could I do? I tried reusing the one from my old Dell, but that turned out to be a non-starter because it required an IDE connector rather than a SATA one. Instead, I hoofed it back to Best Buy to "borrow" a drive for 24 hours or so. This was one of my better decisions during this entire process, as installing from a CD-R that I burned with an 11.04 ISO worked flawlessly. <p>Once my video card finally came in and Ubuntu was installed, I finally felt like I was almost done! Of course, I was wrong. Getting two 24" monitors working in portrait mode turned out to be quite a challenge. The first step was to install the NVIDIA drivers for Ubuntu. Originally I downloaded binaries from the web site, but that broke everything. Fortunately a friend helped me figure out how to uninstall the binary driver (<code>sudo ./NVIDIA-Linux-x86_64-270.41.19.run --uninstall</code>) and replace it with a proper package (<code>sudo apt-get install nvidia-current</code>). Now things were working fine with one monitor in landscape mode, but the jump to portrait was more challenging. <p>Initially, I tried to do everything through the NVIDIA GUI on Ubuntu, but it did not present an option to rotate the monitor. I found <a href="http://www.chrisamiller.com/blog/2008/05/11/rotating-one-monitor-with-ubuntu/">a blog post</a> that recommended adding <code>Option "Rotate" "CCW"</code> to <code>/etc/X11/xorg.conf</code>, which indeed helped me get the first monitor working in portrait mode. I was able to add the second monitor via the NVIDIA GUI and edited <code>xorg.conf</code> again to rotate it. At this point, everything looked great <em>except</em> that I could not drag windows from one monitor to another. 
To do that, I had to enable "TwinView" in the NVIDIA GUI, which did enable me to drag windows across screens, except NVIDIA insisted that the cursor flow from the bottom of the left monitor to the top of the right monitor instead of horizontally across. I did many Google searches to try to find a simple solution, but I had no luck. Ultimately, I ended up reading up on <code>xorg.conf</code> until I understood it enough to edit it by hand to get things to work. At long last, everything was working! <p>The final step was to get everything in the case. This was a little tricky because my power supply came with a ton of cables, and wedging them in such that they did not block any of the three exposed fans was non-trivial. Further, the case did not have a proper slot for an SSD, so I ended up buying a <a href="http://www.amazon.com/gp/product/B002BH3Z8E/ref=as_li_tf_tl?ie=UTF8&tag=bolinfestcom-20&linkCode=as2&camp=217153&creative=399349&creativeASIN=B002BH3Z8E">3.5 to 2 X 2.5-Inch Bay Converter</a> to hold the SSD in place. Unfortunately, the case had weird screw holes such that it was impossible to secure the converter in place, but fortunately the top fan prevents the bay from falling into the rest of the case, so it seems good enough. Considering that I already have a power supply with a gaping hole in it and a mess of cables, this did not seem like my biggest concern. <p>So what have I learned? Primarily, I learned that I should never do this again, but if I had to, I would be much less afraid to mess with hardware the next time around. Including the cost of the video card and the bay converter, I spent $1138.92 in cash and about two days worth of my time. Most of that two days was spent being angry and frustrated. When I was just about finished with the entire project, I noticed an ad on one of the blog posts I used for help to a site I had not heard of before: <a href="http://www.system76.com/">system76.com</a>. Apparently they sell Ubuntu laptops, desktops, and servers, and they have a product line called <a href="http://www.system76.com/product_info.php?cPath=27&products_id=98">Wildebeest Performance</a> that I could customize to basically exactly what I said I wanted at the outset of this project. On system76.com, a machine with Ubuntu 11.04, an i7 processor, 8GB of RAM, a 120GB SSD, and an NVIDIA card with two inputs (one DVI, one VGA) costs $1117.00, which is less than what I paid when buying the parts individually. Obviously buying the machine directly would have been a huge time savings, and I'm sure the inside of the system76 PC would not be nearly as sloppy as mine. In this, and in many other things, I need to have more patience and do more research before diving into a project. It can save a lot of time and sanity in the long run.Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com2tag:blogger.com,1999:blog-2246259935483101149.post-7272612439811474162011-05-27T15:49:00.000-04:002011-05-27T15:49:12.566-04:00GData: I can't take it anymoreI have been playing with GData since I was on the Google Calendar team back in 2005/2006. My experiences with GData can be summarized by the following graph:<br />
<br />
<img src="http://bolinfest.com/images/gdata_600.jpg" width="600" height="493"><br />
<br />
There are many reasons why GData continues to infuriate me:<br />
<ul> <li>GData is not about data—it is an exercise in XML masturbation. If you look at the content of a GData feed, most of the bytes are dedicated to crap that does not matter due to a blind devotion to the Atom Publishing Protocol. In recent history, GData has become better in providing clean JSON data, but the equivalent XML it sends down is still <a href="http://code.google.com/apis/latitude/v1/using_rest.html#LocationResource">horrifying by comparison</a>. I understand the motivation to provide an XML wire format, but at least the original Facebook API had the decency to use <a href="http://en.wikipedia.org/wiki/Plain_Old_XML">POX</a> to reduce the clutter. Atom is the reason why, while I was at Google, the <a href="http://googledataapis.blogspot.com/2006/11/calling-all-web-hackers-json-support.html">first GData JSON API we released</a> had to have a bunch of dollar signs and crap in it, so that you could use it to construct the Atom XML equivalent when making a request to the server. When I wanted to add <a href="http://googleblog.blogspot.com/2006/09/google-calendar-does-something-about.html">web content events</a> to Calendar in 2006, most of my energy was spent on debating what the <a href="http://www.google.com/support/calendar/bin/answer.py?answer=48528">semantically correct Atom representation should be</a> rather than implementing the feature. Hitching GData to the Atom wagon was an exhausting waste of time and energy. Is it really that important to enable users to view their updates to a spreadsheet in Google Reader?<br />
<li>REST APIs aren't cool. <a href="http://dev.twitter.com/pages/streaming_api">Streaming APIs</a> are cool. If we want to have a real time Web, then we need streaming APIs. PubSub can be used as a stopgap, but it's not as slick. For data that may change frequently (such as a user's location in Google Latitude), you have no choice but to poll aggressively using a REST API.<br />
<li>GData has traditionally given JavaScript developers short shrift. Look at the API support for the GData <a href="http://code.google.com/p/gdata-java-client/">Java client library</a> or <a href="http://code.google.com/p/gdata-python-client/">Python client library</a> compared to the <a href="http://code.google.com/p/gdata-javascript-client/">JavaScript client library</a>. JavaScript is the <i>lingua franca</i> of the Web: give it the attention it deserves.<br />
<li>The notion of Atom forces the idea of "feeds" and "entries." That is fine for something like Google Calendar, but is less appropriate for hierarchical data, such as that stored in Google Tasks. Further, for data that does not naturally split into "entries," such as a Google Doc, the entire document becomes a single entry. Therefore, making a minor change to a Google Doc via GData requires <em>uploading the entire document rather than the diff</em>. This is quite expensive if you want to create your own editor for a Google Doc that has autosave.<br />
<li>Perhaps the biggest time sink when getting started with GData is wrapping your head around the authentication protocols. To play around with your data, the first thing you have to do is set up a bunch of crap to get an AuthSub token. <b>Why can't I just fill out a form on code.google.com and give myself one?</b> Setting up AuthSub is not the compelling piece of the application I want to build—interacting with my data is. Let me play with my data first and build a prototype so I can determine if what I'm trying to build is worth sharing with others and productionizing, and then I'll worry about authentication. <a href="http://developers.facebook.com/docs/reference/javascript/">Facebook's JavaScript SDK</a> does this exceptionally well. After registering your application, you can include one <code><script></code> tag on your page and start using the Facebook API <em>without writing any server code</em>. It's much more fun and makes it easier to focus on the interesting part of your app.<br />
</ul>If GData were great, then Google products would be built on top of GData. A quick look under the hood will reveal that no serious web application at Google (Gmail, Calendar, Docs, etc.) uses it. If GData isn't good enough for Google engineers, then why should we be using it?Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com39tag:blogger.com,1999:blog-2246259935483101149.post-2298974329223296282011-05-16T15:13:00.001-04:002011-05-16T15:15:15.601-04:00Reflecting on my Google I/O 2011 TalkThis year was my first trip to Google I/O, both as an attendee and a speaker. The title of my talk was <a href="http://www.google.com/events/io/2011/sessions/javascript-programming-in-the-large-with-closure-tools.html">JavaScript Programming in the Large with Closure Tools</a>. As I have spoken to more and more developers, I have come to appreciate how jQuery and Closure are good for different things, and that Closure's true strength is when working in large JavaScript codebases, particularly for <a href="http://en.wikipedia.org/wiki/Single-page_application">SPAs</a>. With the increased interest in HTML5 and offline applications, I believe that JavaScript codebases will continue to grow, and that Closure will become even more important as we move forward, which is why I was eager to deliver my talk at I/O.<br />
<br />
Although it does not appear to be linked from the <a href="http://www.google.com/events/io/2011/sessions.html">sessions page</a> yet, the video of my I/O talk <a href="http://www.youtube.com/user/GoogleDevelopers#p/u/11/M3uWx-fhjUc">is available on YouTube</a>. I have also made my <a href="http://bolinfest.com/talks/2011/googleio/">slides available online</a>, though I made a concerted effort to put less text on the slides than normal, so they may not make as much sense without the narration from my talk.<br />
<br />
I was incredibly nervous, but I did watch all 57 minutes of the video to try to evaluate myself as a speaker. After observing myself, I'm actually quite happy with how things went! I was already aware that I sometimes list back and forth when speaking, and that's still a problem (fortunately, most of the video shows the slides, not me, so you may not be able to tell how bad my nervous habit is). My mumbling isn't as bad as it used to be (historically, I've been pretty bad about it during ordinary conversation, so mumbling during public speaking was far worse). It appears that when I'm diffident about what I'm saying (such as when I'm trying to make a joke that I'm not sure the audience will find funny), I often trail off at the end of the sentence, so I need to work on that. On the plus side, the word "like" appears to drop out of my vernacular when I step on a stage, and I knew my slides well enough that I was able to deliver all of the points I wanted to make without having to stare at them too much. (I never practiced the talk aloud before giving it—I only played through what I wanted to say in my head. I can't take myself seriously when I try to deliver my talk to myself in front of a mirror.)<br />
<br />
If you pay attention during the talk, you'll notice that I switch slides using a real Nintendo controller. The week before, I was in Portland for <a href="http://2011.jsconf.us/">JSConf</a>, which had an 8-bit theme. There, I gave a <a href="http://bolinfest.com/talks/2011/jsconf.us/">talk on a novel use of the <code>with</code> keyword in JavaScript</a>, but I never worked the 8-bit theme into my slides, so I decided to do so for Google I/O (you'll also note that Mario and Link make cameos in my I/O slides). Fortunately, I had messed around with my <a href="http://www.retrousb.com/product_info.php?cPath=21&products_id=28">USB NES RetroPort before</a>, so I already had some sample Java code to leverage—I ended up putting the whole NES navigation thing together the morning of my talk.<br />
<br />
For my <code>with</code> talk the week before, I had already created my own presentation viewer in Closure/JavaScript so I could leverage things like <a href="http://code.google.com/p/google-code-prettify/">prettify</a>. In order to provide an API to the NES controller, I exported some JavaScript functions to navigate the presentation forward and backward (<code>navPresoForward()</code> and <code>navPresoBack()</code>). Then I embedded the URL to the presentation in an <code>org.eclipse.swt.browser.Browser</code> and used <a href="http://sourceforge.net/projects/javajoystick/">com.centralnexus.input.Joystick</a> to process the input from the controller and convert right- and left-arrow presses into <code>browser.execute("navPresoForward()")</code> and <code>browser.execute("navPresoBack()")</code> calls in Java. (The one sticking point was discovering that joystick input had to be processed in a special thread scheduled by <a href="http://www.jdocs.com/swt/3.2/org/eclipse/swt/widgets/Display.html?m=M-asyncExec%28Runnable%29#M-asyncExec%28Runnable%29">Display.asyncExec()</a>.) Maybe it wasn't as cool as Marcin Wichary's Power Glove during his and Ryan's talk, <a href="http://www.google.com/events/io/2011/sessions/the-secrets-of-google-pac-man-a-game-show.html">The Secrets of Google Pac-Man: A Game Show</a>, but I thought theirs was the best talk of the entire conference, so they were tough to compete with.<br />
<br />
<em>Want to learn more about Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com8tag:blogger.com,1999:blog-2246259935483101149.post-10188163102144608252011-04-20T09:27:00.000-04:002011-04-20T09:27:14.835-04:00jquery.com uses only 34% of jQueryI created a Firefox extension, <a href="https://addons.mozilla.org/en-US/firefox/addon/jsbloat/">JsBloat</a>, to help determine what fraction of the jQuery library a web page uses. It leverages <a href="http://siliconforks.com/jscoverage/">JSCoverage</a> to instrument jQuery and keep track of which lines of the library are executed (and how often). The table below is a small sample of sites that I tested with JsBloat. Here, "percentage used" means the fraction of lines of code that were executed in the version of the jQuery library that was loaded:<br />
<br />
<table border="1" cellpadding="4"><tbody>
<tr> <th>URL</th> <th>jQuery version</th> <th>% used on page load</th> <th>% used after mousing around</th> </tr>
<tr> <td><a href="http://jquery.com/">jquery.com</a></td> <td>1.4.2</td> <td>18%</td> <td>34%</td> </tr>
<tr> <td><a href="http://docs.jquery.com/Main_Page">docs.jquery.com/Main_Page</a></td> <td>1.4.4</td> <td>23%</td> <td>23%</td> </tr>
<tr> <td><a href="http://stackoverflow.com/">stackoverflow.com</a></td> <td>1.5.1</td> <td>30%</td> <td>33%</td> </tr>
<tr> <td><a href="http://www.mozilla.com/en-US/firefox/new/?from=getfirefox">getfirefox.com</a></td> <td>1.4.4</td> <td>19%</td> <td>23%</td> </tr>
</tbody></table><br />
Note that three of the four sites exercise new code paths as a result of mousing around the page, so they do not appear to be using pure CSS for their hover effects. For example, on jquery.com, a single mouseover of the "Lightweight Footprint" text causes a mouseover animation that increases the percentage of jQuery used by 11 percentage points! Also, when jQuery is loaded initially on stackoverflow.com, it calls <code>$()</code> 61 times, but after mousing around quite a bit (which only increases the percentage of code used to 33%), the number of times that <code>$()</code> is executed jumps to 9875! (Your results may vary, depending on how many elements you mouse over, but it took less than twenty seconds of mousing for me to achieve my result. See the <b>Postscript</b> below to learn how to run JsBloat on any jQuery-powered page.) Although code coverage is admittedly a coarse metric for this sort of experiment, I still believe that the results are compelling.<br />
<br />
I decided to run this test because I was curious about how much jQuery users would stand to gain if they could leverage the <a href="http://code.google.com/closure/compiler/docs/api-tutorial3.html">Advanced mode of the Closure Compiler</a> to compile their code. JavaScript that is written for Advanced mode (such as the <a href="http://code.google.com/closure/library/">Closure Library</a>) can be compiled so that it is far smaller than its original source because the Closure Compiler will remove code that it determines is unreachable. Therefore, it will only include the lines of JavaScript code that you will actually use, whereas most clients of jQuery appear to be including much more than that.<br />
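<br />
As a toy illustration (mine, not taken from jQuery or the Closure Library), consider a file with two helper functions where only one is ever called:<pre>function formatName(first, last) {
  return last + ', ' + first;
}

// Never called anywhere in the program.
function shoutName(first, last) {
  return (first + ' ' + last).toUpperCase();
}

alert(formatName('Ada', 'Lovelace'));</pre>Because <code>shoutName()</code> is unreachable, Advanced mode drops it entirely and emits something close to <code>alert("Lovelace, Ada");</code>, whereas a library loaded from a CDN ships every line, reachable or not.<br />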
<br />
From these preliminary results, I believe that most sites that use jQuery could considerably reduce the amount of JavaScript that they serve by using Closure. As always, compiling custom JavaScript for every page must be weighed against caching benefits, though I suspect that the Compiler could find a healthy subset of jQuery that is universal to all pages on a particular site.<br />
<br />
If you're interested in learning more about using Closure to do magical things to your JavaScript, come find me at <a href="http://2011.jsconf.us/#!/schedule">Track B of JSConf</a> where I'm going to provide some mind-blowing examples of how to use the <code>with</code> keyword effectively! And if I don't see you at JSConf, then hopefully I'll see you at Google I/O where I'll be talking about <a href="http://www.google.com/events/io/2011/sessions.html">JavaScript Programming in the Large with Closure Tools</a>.<br />
<br />
<b>Postscript: If you're curious how JsBloat works...</b><br />
<br />
JsBloat works by intercepting requests to <a href="http://code.google.com/apis/libraries/devguide.html">ajax.googleapis.com</a> for jQuery and injecting its own instrumented version of jQuery into the response. The instrumented code looks something like:<pre>for (var i = 0, l = insert.length; (i < l); (i++)) {
  _$jscoverage['jquery-1.5.2.js'][5517]++;
  var elems = ((i > 0)? this.clone(true): this).get();
  _$jscoverage['jquery-1.5.2.js'][5518]++;
  (jQuery(insert[i])[original])(elems);
  _$jscoverage['jquery-1.5.2.js'][5519]++;
  ret = ret.concat(elems);
}
_$jscoverage['jquery-1.5.2.js'][5522]++;
return this.pushStack(ret, name, insert.selector);
</pre>so that after each line of code is executed, the <code>_$jscoverage</code> global increments its count for the (file, line number) pair. JSCoverage provides a simple HTML interface for inspecting this data, which JsBloat exposes.<br />
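<br />
If you want to poke at the raw data yourself, the counters are just numbers stored in a sparse array keyed by line number, so a rough version of the "percentage used" figure can be computed from the browser console. This snippet is my own sketch (the file name must match the one JsBloat instrumented), not part of JsBloat itself:<pre>var counts = _$jscoverage['jquery-1.5.2.js'];
var executed = 0, total = 0;
for (var line in counts) {
  // Skip any non-numeric bookkeeping properties on the coverage array.
  if (!/^\d+$/.test(line)) continue;
  total++;
  if (counts[line] > 0) executed++;
}
console.log('used ' + executed + ' of ' + total + ' lines (' +
    Math.round(100 * executed / total) + '%)');</pre>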
<br />
You can tell when JsBloat has intercepted a request because it injects a button into the upper-left-hand corner of the page, as shown below:<br />
<br />
<img src="http://bolinfest.com/javascript/jsbloat_before.png" width="561" width="321"><br />
<br />
Clicking on that button toggles the JSCoverage UI:<br />
<br />
<img src="http://bolinfest.com/javascript/jsbloat_after.png" width="561" width="321"><br />
<br />
Once JsBloat is installed, you can also configure it to intercept requests for jQuery from any URL. For example, <a href="http://getfirefox.com/">getfirefox.com</a> loads jQuery from <a href="http://mozcom-cdn.mozilla.net/js/jquery/jquery.min.js">mozcom-cdn.mozilla.net</a>, so you can set a preference in <code>about:config</code> to serve the instrumented version of jQuery 1.4.4 when getfirefox.com loads jQuery from its CDN. More information for JsBloat users is available on the <a href="http://code.google.com/p/jsbloat/">project page</a>.<br />
<br />
If you are interested in extending JsBloat to perform your own experiments, the <a href="http://code.google.com/p/jsbloat/">source is available on Google Code under a GPL v2 license</a>. (I don't ordinarily make my work available under the GPL, but because JSCoverage is available under the GPL, JsBloat must also be GPL'd.) For example, if you download JSCoverage and the source code for JsBloat, you can use JSCoverage to instrument your own JavaScript files and then include them in a custom build of JsBloat. Hopefully this will help you identify opportunities to trim down the JavaScript that you send to your users.<br />
<br />
<i>Want to learn more about Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</i>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com2tag:blogger.com,1999:blog-2246259935483101149.post-63917009046625102622011-04-06T10:22:00.000-04:002011-04-06T10:22:37.482-04:00Suggested Improvements to JSONToday I'm publishing an essay about my <a href="http://bolinfest.com/essays/json.html">suggested improvements to JSON</a>. I really like JSON a lot, but I think that it could use a few tweaks to make it easier to use for the common man.<br />
<br />
One question some have asked is: even if these changes to JSON were accepted, how would you transition existing systems? I would lean towards what I call the Nike approach, which is: "just do it." That is, start updating the <code>JSON.parse()</code> method in the browser to accept these extensions to JSON. What I am proposing is a strict superset of what's in <a href="http://www.ietf.org/rfc/rfc4627.txt?number=4627">RFC 4627</a> today (which you may recall claims that "[a] JSON parser MAY accept non-JSON forms or extensions"), so it would not break anyone who continued to use ES3-style JSON.<br />
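<br />
In the meantime, clients could probe for the extensions the same way we feature-detect everything else on the Web. As a hypothetical example (trailing commas standing in for whichever extension you care about):<pre>// Returns true only if this JSON.parse() accepts the extension;
// engines that implement plain RFC 4627 throw a SyntaxError here.
var supportsTrailingCommas = (function() {
  try {
    JSON.parse('[1,]');
    return true;
  } catch (e) {
    return false;
  }
})();</pre>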
<br />
At first, that may sound ridiculous: how could we update the spec without versioning? Well, if you haven't been paying attention to the HTML5 movement, we are already doing this sort of thing <em>all the time</em>. Browser behavior continues to change and improve, and you just have to roll with the punches. It's not ideal, and yet the Web moves forward, faster than waiting for standards bodies to agree on something.<br />
<br />
Though if the Nike approach is too heretical, then I think that versioning is also a reasonable option. In the browser, a read-only <code>JSON.version</code> property could be added, though I don't imagine most developers would check it at runtime. Like most things on the web, a least-common-denominator approach would be used by those who want to be safe, which would ignore the version number. Only those who do <a href="http://infrequently.org/2011/01/cutting-the-interrogation-short/">user-agent sniffing on the server</a> would be able to serve a slimmer JavaScript library, though that is already true for many other browser features today. I trust <a href="http://www.browserscope.org/">Browserscope</a> and <a href="http://caniuse.com/">caniuse.com</a> much more than any sort of formal specification, anyway.<br />
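<br />
To make the versioning alternative concrete, the check might look something like this (<code>JSON.version</code> is purely hypothetical; no browser ships it today):<pre>if (typeof JSON.version === 'number' && JSON.version >= 2) {
  // Safe to serve payloads that use the extended grammar.
} else {
  // Least-common-denominator path: stick to RFC 4627 JSON.
}</pre>Though as I said, I suspect most code would simply take the second branch unconditionally.<br />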
<br />
Special thanks to <a href="http://kushaldave.com/">Kushal</a>, <a href="http://minivishnu.net/">Dolapo</a>, and <a href="http://blog.persistent.info/">Mihai</a> for providing feedback on my essay.Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com6tag:blogger.com,1999:blog-2246259935483101149.post-19421479361048540592011-04-04T10:19:00.001-04:002011-04-04T11:11:46.300-04:00What I learned from üjsFor April Fool's, I released a mock JavaScript library, <a href="http://üjs.com/">üjs</a> (pronounced "umlaut JS"). The library was "mock" in the "mockumentary" Spin̈al Tap sense, not the unit testing sense. It took me about a day to create the site, and I learned some interesting lessons along the way, which I thought I would share:<ul><li>In purchasing the domain name <a href="http://üjs.com/">üjs.com</a>, I learned about <a href="http://en.wikipedia.org/wiki/Punycode">Punycode</a>. Basically, when you type üjs.com into the browser, the browser will translate it to the Punycode equivalent, which is <a href="http://xn--js-wka.com/">http://xn--js-wka.com/</a>. On one hand, this is what prevents you from buying göögle.com and creating a giant phishing scheme; on the other hand, because most users are unfamiliar with Punycode, going to a legitimate domain like üjs.com looks like a giant phishing scheme due to the rewrite.<br />
<li>I only promoted üjs through three channels: <a href="http://twitter.com/#!/bolinfest/status/53676791633690624">Twitter</a>, <a href="http://news.ycombinator.com/item?id=2396542">Hacker News</a>, and <a href="http://www.reddit.com/r/javascript/comments/ggice/faster_lighter_js_library_üjs/">Reddit</a>. On Twitter, I discovered that using a Punycode domain as your punchline really limits its reach because instead of tweeting about üjs.com, many tweeted about xn--js-wka.com (because that's what you can copy from the location bar after you visit the site), or (somewhat ironically) a URL-shortened version of the domain. I suspect more people would have followed the shared link if they could see the original domain name.<br />
<li>According to Google Analytics, just over half of my traffic (50.04%) on April 1 was from Hacker News (where I made it to the front page!). Another 9.89% was from Reddit. Analytics also claims that only 1.57% of my traffic came from Twitter, though 29.76% of my traffic was "direct," so I assume that was from Twitter, as well. On April 1, I had 3690 visits, and then another 443 on April 2 (presumably from the Eastern Hemisphere, where people woke up on Saturday to the aftermath of Internet April Fools' pranks).<br />
<li>GoDaddy was not able to do a domain search for <b>üjs.com</b>, presumably due to the non-ASCII characters. It turned out that <a href="http://www.name.com/">name.com</a> was able to, so I ended up going with them. (Presumably if I understood Punycode well enough at the outset of this project, I could have registered xn--js-wka.com through their site.)<br />
<li>I found a discount code for name.com, so my total cost out of pocket for this project was $8.75 (my hosting fees are a sunk cost).<br />
<li>Not that it was a substantial investment, but I was hopeful/curious whether I could break even in some way. I felt that traditional ads would have been a little over the top, so instead I decided to include two <a href="https://affiliate-program.amazon.com/">Amazon Associates</a> links. Although I produced 350 clicks to Amazon (!), I failed to generate any conversions, so I did not make any money off of my ads.<br />
<li><a href="http://www.chartbeat.com/">Chartbeat</a> is really, really cool. It made it much more fun to watch all of the web traffic to üjs.com during the day. (I wish that I generally had enough traffic to make it worth using Chartbeat all the time!) I believe that I had 144 simultaneous visitors during the peak of üjs, and I was amazed at how dispersed the traffic was from across the globe.<br />
<li>One thing that I did not realize is that Chartbeat does not do aggregate statistics. Fortunately, I set up Google Analytics in addition to Chartbeat, so I had access to both types of data.<br />
<li>Response times were about 1s on average during peak traffic times. At first, I thought that was horrendously slow, but then I realized that there were a large number of requests coming from outside the US, which increased the average. Most of the requests from the US loaded in the low hundreds of milliseconds, which made me feel good about <a href="http://rimuhosting.com/">my choice of host</a> (who <a href="http://twitter.com/#!/rimuhosting/status/52113750106443778">really is excellent</a>, btw).<br />
<li>The <b>n̈</b> in Spin̈al Tap does not exist as a single Unicode code point. Instead, it is produced by printing an <b>n</b> followed by a combining diaeresis (U+0308), which renders the umlaut over the preceding character. (The umlaut does not display correctly on my Chrome 10 on Windows, but it's fine on Linux.) Other characters, such as <b>ü</b>, are available as single, precomposed code points. <a href="http://en.wikipedia.org/wiki/Umlaut_(diacritic)">Wikipedia</a> has a list of letters that "come with umlauts," so I used those characters whenever possible on üjs.com, but for others, I had to use the "letter followed by a combining umlaut" trick.<br />
<li><code>var n\u0308;</code> is a valid JavaScript variable declaration, but <code>var \u00fc;</code> is not.<br />
<li>Initially, the GitHub badge on my page did not link to anything, as I just wanted to satirize the trend I've been seeing in open source lately. Though after a <a href="http://twitter.com/#!/hongrich/status/53839762112516096">request from a coworker</a>, I imported <a href="https://github.com/bolinfest/umlautjs">all of the files I used to create üjs</a> into GitHub. (Incidentally, when I tried to name the GitHub project <b>üjs</b>, they replaced the <b>ü</b> with a hyphen, so I renamed it <b>umlautjs</b>).<br />
</ul>In the end, even though I did not make my $8.75 back, I had a really great time with this project. Although it's clear that <a href="http://news.ycombinator.com/item?id=2396936">some people on the Web don't enjoy April Fools</a>, I think it is a nice opportunity to see some good satire. (My personal favorite was <a href="http://www.atlassian.com/en/angrynerds">Angry Nerds</a> by <a href="http://www.atlassian.com/">Atlassian</a>.) <p>Further, satire is not completely frivolous: it is an (arguably passive-aggressive) way of making a point. In üjs, mine was this: these tiny JavaScript libraries do little to move the needle for JavaScript development as a community. Instead of contributing a tiny library, why not focus on contributing a tiny fix to a big library? Think about how many more people you will affect with a bug fix to jQuery than by publishing a single JavaScript file with your favorite four helper functions. Or if you are going to create a new library, make sure that you are doing something substantially different from what is already out there. For example, Closure and jQuery are based on different design principles. Both have their use cases, and they serve very different styles of development, so it makes sense for those separate projects to exist and grow. <p>If you have been following my blog, you probably know that I'm really big on "JavaScript programming in the large," which will be the subject of <a href="http://www.google.com/events/io/2011/">my talk at Google I/O this year</a>. I hope to see you there! <p><em>Want to learn more about Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com4tag:blogger.com,1999:blog-2246259935483101149.post-83047439313024153452011-04-01T09:00:00.000-04:002011-04-01T09:00:06.666-04:00Awesome New JavaScript Library!Despite months of advocating Closure, I've finally given up and found a superior JavaScript library (and it's not jQuery!): <a href="http://üjs.com/">http://üjs.com/</a>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com4tag:blogger.com,1999:blog-2246259935483101149.post-31160068947951277912011-01-18T23:21:00.000-05:002011-01-18T23:21:38.324-05:00What are the most important classes for high school students to succeed in software engineering?What are the most important classes for high school students to succeed in software engineering? That is the question that I try to answer in an <a href="http://bolinfest.com/essays/highschool.html">essay of the same name</a>.<br />
<br />
Also, this is the first essay I have written using <a href="http://docbookeditor.appspot.com/">NJEdit</a>, which is the editing software that I built (and have since <a href="http://code.google.com/p/docbookeditor/">open sourced</a>) in order to write <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide</a>. It helps me focus more on content while worrying less about formatting, though it still has a ways to go before becoming my "one click" publishing solution.<br />
<br />
A unique feature of NJEdit is that when I produce the HTML to publish my essay, I also produce <a href="http://bolinfest.com/essays/highschool.xml">the DocBook XML version as a by-product</a>! It's not a big selling point today, but if I ever want to publish anything to print again, I'll be ready! For open-source projects that are slowly creating HTML documentation that they hope to publish as a print book one day, NJEdit might be the solution.<br />
<br />
And if it is, maybe someone will help me fix its bugs...<br />
<br />
<em>Want to learn more about Closure? Pick up a copy of my new book, <a href="http://www.amazon.com/gp/product/1449381871?ie=UTF8&tag=bolinfestcom-20&link_code=as3&camp=211189&creative=373489&creativeASIN=1449381871">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em>Michaelhttp://www.blogger.com/profile/14618340371367353616noreply@blogger.com1