jQuery & others vs. Vanilla Web

Disclaimer: although this post debates the usefulness of full-featured libraries in general, the arguments are based on many aspects of the most popular one, jQuery. They should still hold mostly true for the others.

A very interesting conversation just happened as Christian Heilmann proposed the idea of a book about modern Web authoring without libraries such as jQuery. It turned into a heated debate between Vanilla Web and full-featured library supporters. It is obvious that both sides share the common goal of improving the Web and helping Web authors, and both have valid arguments to defend their vision.

I personally don’t know where to stand, so I tried to list the arguments that have been raised by library and Vanilla proponents there and in other places.

File size. Size does matter?

jQuery weighs around 32KB minified and gzipped. At the time of writing, the average weight of a page is 1,239KB and it only keeps increasing. The size of a single lib appears to be insignificant, and a developer should first look into image compression and lazy-loading to reduce the payload.

Average page size: 1,239KB

On the other hand, it is quite common to pick UI components that require jQuery UI, some Bootstrap plugins and Raphael-based visualizations. Soon enough, the file size of script and style-sheet dependencies can become significant. Using CDN-hosted libraries (from Google’s CDN or cdnjs) increases the chances of these resources being loaded from cache, but version fragmentation mitigates that benefit; and CDN-hosted modular builds aren’t available for the libs that offer them (jQuery UI, Dojo).

Conclusion

Most of the time, using one library in a website isn’t a problem, but dependencies can add up to the point where their size becomes a problem.

Performance. It’s how you use it.

There’s a funny “Speed Comparison” table on vanilla-js.com that shows how much better document.getElementById("test") performs compared to jQuery("#test"). That is undeniable, but what matters is that any code outside hot loops doesn’t need the extra speed; that best practices can help avoid most bottlenecks (use event delegation, modify class names instead of inline styles, …); and that libraries benefit from many clever optimizations that are not always found in Vanilla code.
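As a hedged illustration of the class-name practice (the element id, class name and styles below are assumptions), the idea is to describe the end state once in the style sheet and flip a single class, instead of writing several inline styles:

// instead of several inline style writes on every update...
var panel = document.getElementById( "panel" );
panel.style.display = "block";
panel.style.left = "100px";
panel.style.opacity = "1";

// ...define a .open rule in the style sheet and toggle one class name
panel.className += " open";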

Conclusion

By using best practices, it is possible to write code that performs equally well with or without libraries. The opposite is also true: it is just as easy to write slow code in both cases.

YAGNI vs. Don’t Reinvent the Wheel

A single UI component doesn’t need all features exposed by generalist libraries. There might even be a lot of useless features for an entire application: code for backward compatibility with IE6-7 (e.g. a selector engine), JS animations or complex XHR. All libs have switched to a modular architecture, but their granularity will never be good enough to bundle “just what this component needs”.

On the other hand, there are still many features offered by these libs that aren’t trivial to implement, even when targeting the most recent browsers: delegated event listeners, node-list looping, a white-listed DOM builder or an offset getter, to name a few. Embedding those features into a component isn’t reasonable: developers need reusable, reliable, documented and maintained utilities implementing these features. Isn’t that what our current libraries are?
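As a rough sketch of one of these, a minimal delegated event listener for recent browsers could look like the following (the delegate helper below is an illustrative assumption, not code from any existing library):

// register a single listener on a root element and filter events
// bubbling up from descendants that match a selector
function delegate( root, type, selector, handler ) {
  root.addEventListener( type, function( event ) {
    var target = event.target;
    // walk up from the event target to the root, looking for a match
    while ( target && target !== root ) {
      var matches = target.matches || target.msMatchesSelector ||
                    target.webkitMatchesSelector || target.mozMatchesSelector;
      if ( matches && matches.call( target, selector ) ) {
        handler.call( target, event );
        return;
      }
      target = target.parentNode;
    }
  }, false );
}

// usage: one listener handles clicks on every current and future li.item
delegate( document, "click", "li.item", function() { /* ... */ } );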

Conclusion

Current libraries are bloated… but they’re so useful and their level of quality so hard to match!

Moving the Web forward.

Time spent rewriting features already available in current libs is time not spent building tools that have never been seen before. And time spent learning about low-level DOM stuff is time not spent learning about the tools that can improve productivity.

But as the Web changes, being able to question the way websites are built is fundamental: “Can I adopt a gracefully degrading approach and use CSS here instead of JS?”, “How hard would it be to implement X when targeting only IE8 (or 9) and better browsers?”. If this leads to better websites and tools, who would complain?

Conclusion

Learning what piques one’s curiosity is a good way to move the Web forward; whether that is low-level or high-level knowledge doesn’t really matter.

General conclusion

Including a full-featured library or sticking to Vanilla Web in a project shouldn’t be a reflex but the result of a well-informed choice. A good lib will still prove useful in most cases, but I personally appreciate all efforts to build lib-independent components that give developers the ultimate choice of using a library or not.

Do you see additional reasons to use full-featured libraries, or to leave them behind? Feel free to share!

Working around CSS Transitions bugs with getComputedStyle

There are numerous problems and browser inconsistencies with the current implementations of CSS Transitions, as demonstrated on csstransition.net. The following problematic situations can be solved by extensive use of getComputedStyle():

Animating after display

Let’s say you have an element styled with display: none; left: 0; transition: left .5s; and you want to reveal it and animate its position.

Simply changing the style of the element to display: block; left: 100px; (function display_animate below) won’t work, because the browser doesn’t take the initial position of the element into account before changing it. You can force the browser to behave as expected by accessing the computed style of the element right after displaying it (function display_animate_fix below).
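A minimal sketch of both functions, assuming the element described above and a 100px target position:

// does not animate: both changes are applied in the same style
// recalculation, so the element simply appears at 100px
function display_animate( elem ) {
  elem.style.display = "block";
  elem.style.left = "100px";
}

// animates: reading a computed style forces the browser to take the
// initial position (left: 0) into account before it is changed
function display_animate_fix( elem ) {
  elem.style.display = "block";
  window.getComputedStyle( elem ).left;
  elem.style.left = "100px";   // transition: left .5s now runs
}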

Muting transitions

What if an element is styled with display: none; left: 0; transition: left .5s; and you want to jump to a different position, then animate it back to its origin?

The trick to temporarily disable the transitions is to hide your element with display: none;, modify its style and then display it again (function jump_animate below). But as you might have guessed, this won't work unless you give the browser an opportunity to realize what's happening... by accessing the computed style of the element, once after hiding it, and once after revealing it (function jump_animate_fix below).
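A minimal sketch of both functions, assuming a 100px jump:

// does not animate back: all the changes are coalesced into a single
// style recalculation, so nothing is transitioned
function jump_animate( elem ) {
  elem.style.display = "none";
  elem.style.left = "100px";
  elem.style.display = "block";
  elem.style.left = "0";
}

// animates back to the origin
function jump_animate_fix( elem ) {
  elem.style.display = "none";            // mute transitions while hidden
  elem.style.left = "100px";              // jump to the new position
  window.getComputedStyle( elem ).left;   // let the browser register the hidden state
  elem.style.display = "block";           // reveal the element at its new position
  window.getComputedStyle( elem ).left;   // let it register the revealed state
  elem.style.left = "0";                  // this change is transitioned back to the origin
}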

Note that it is also possible to temporarily set the element to transition: none;, in combination with two getComputedStyle calls, but this requires more code because of vendor prefixes.

Animation from/to auto

This is one of the most requested features of CSS Transitions, but it has never been spec’ed clearly. It doesn't work at all in Firefox (see associated bug) and Opera, the WebKit guys tried to fix it but I think they made it worse, and IE10... I don't know.

Instead of detailing all the methods that don't work, I'll present the ones that do:
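One approach that works, sketched here under the assumption of an element that starts with height: 0; overflow: hidden; transition: height .5s;, is to measure the natural height and transition between explicit pixel values:

function expand_to_auto_height( elem ) {
  var target = elem.scrollHeight + "px";   // natural height of the content
  elem.style.height = target;              // animates from 0 to the measured value
  // once the transition ends, height can be set back to "auto"
  // so the element keeps adapting to its content
}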

And you thought CSS Transitions would allow you to animate your pages without JS?

Activable: a modern approach to UI components

Activable is a new step between current libraries of UI components and the upcoming Web Components: a lightweight, IE8-compatible script that makes it possible to build common UI components in an entirely declarative way. In this post, I explain the rationale behind this project.

The newcomer

A few weeks ago I dived into Twitter Bootstrap, trying to understand what the fuss was all about. This project combines a CSS framework (a boilerplate CSS + a grid system + some layouts) with a library of UI components powered by jQuery and custom plugins. TB is modern in many ways:

  • it includes CSS mechanisms to build responsive designs,
  • the UI components are progressively enhanced with CSS3,
  • it ditched support for IE6 and has good support for recent smartphones and tablets.

Despite its reduced file size and fresher theme, the library of UI components has nothing revolutionary compared to the five-year-old and already venerable jQuery UI:

  • it depends on jQuery,
  • each component needs to be initialized: $( elem ).component();

Realizing that made me cringe a little, and share my disappointment…

The declarative era

How long until we can write HTML/CSS only and have functional UI components? Let’s consider the current situation:

  • it’s 2012 and we’re still waiting for Mozilla to implement <input type="range" />,
  • Dijit (Dojo’s UI library) claims to offer declarative components, but they’ll only work if present in the document on DOMContentLoaded,
  • Web components will, in the future, give Web developers the ability to develop clean & reusable UI components (if you haven’t read the intro, I advise you to do so), but a third of our users still browse the web using IE8 or worse.
Let’s face it, I’ll have to initialize my UI using JS for the next ten years.
…All I asked for was the ability to automatically apply a behavior to some special <div>s…
― me, one month ago

Event delegation to the rescue

All hope is not lost, for event delegation makes it possible to register a single click listener on a document, and filter the events bubbling up the tree according to their target. Using jQuery for instance, one can register a global delegated event handler in the following way:
$( document ).on( "click", "li.item", function handler() { … } );
In this case, the handler would be executed for every click inside <li> elements of class item.

Creating a declarative tab component is then just a matter of associating a behavior with some special markup:

Behavior
A tab component consists of a list of labels and content panes. When a label is selected, the associated content is displayed. The content of the other tabs in the same component is hidden.
Markup
What needs to be identified in the markup is: the instance of the tab component (.tab), the clickable labels (<a>), their associated content (<div>), as well as “which tab is currently displayed” (.active). The relationship between labels and their content can be expressed using internal links.
Glue
And finally write the JS that adds and removes the .active class in a delegated click handler, and the CSS that highlights active labels and shows active content panes.
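A minimal sketch of such a tab component (the exact markup structure, the jQuery-based glue and the CSS rule are illustrative assumptions; only .tab, .active and the internal links come from the description above):

<div class="tab">
  <ul>
    <li class="active"><a href="#first">First</a></li>
    <li><a href="#second">Second</a></li>
  </ul>
  <div id="first" class="active">First content pane</div>
  <div id="second">Second content pane</div>
</div>

// glue: a single delegated click handler moves the .active class around
$( document ).on( "click", ".tab a", function( event ) {
  event.preventDefault();
  var tab = $( this ).closest( ".tab" );
  tab.find( ".active" ).removeClass( "active" );              // deactivate current label and pane
  $( this ).parent().addClass( "active" );                    // activate the clicked label
  tab.find( $( this ).attr( "href" ) ).addClass( "active" );  // and its associated content pane
});

The CSS part would then simply hide .tab > div and display .tab > div.active.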

Introducing Activable

Activable is the JS glue described above.

Note that the suggested .tab class has been replaced by a generic data-activable attribute. The JS tab of this fiddle has been included to show how many lines of JS were written to create this specific demo (if it’s blank, it’s not a bug).

If this component looks familiar to Twitter Bootstrap users, it’s because its CSS framework is being used in this example. In fact, it is possible to recreate many of TB’s UI components using Activable only…

Generalization

The behavior “when one tab is clicked it becomes visible and the others are hidden” happens to be very similar to other component behaviors:

  • “When one accordion pane is clicked, it expands and the others collapse”,
  • “When one radio button is checked the other buttons are unchecked”,
  • “When one branch of a tree-list is clicked, its children become visible and the other children are hidden”,
  • “When one image in a carousel becomes visible, the other images are hidden”,

This could be generalized to “When an element in a group becomes active, the others become inactive”. One simple variation of this behavior would be to allow the state of an element to be toggled, and another variation would be that state changes don’t affect other elements of the same group.

Activable implements these three behaviors and can be used to implement tab/accordion/radio buttons/checkbox buttons/tree-list and even carousel components, as illustrated in this demo.

Using and extending

Although there is no dependency on any CSS framework, there are some conventions to respect in your markup to make your UI components “activable”. Those conventions are detailed in the documentation, along with the JS API that can be used to extend the components:

  • event listeners: .on( "activate", callback ) / .on( "deactivate", callback ) / .off( type, [callback])
  • direct modifiers: .activate() / .deactivate() / .toggle()
  • iterative modifiers: .next() / .prev()
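
As a hedged usage sketch (item below stands for an activable object obtained as described in the documentation; the way it is obtained is not shown here):

item.on( "activate", function() {
  // react when this element becomes active
});
item.toggle();   // flip the element’s state programmatically
item.next();     // activate the next element of the same group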

What’s next

Activable is not meant to replace Twitter Bootstrap or jQuery, it’s only here to help you develop UI components quicker and keep your code clean. If I was a marketing guy, I’d say that Activable is the Backbone of UI component libraries (with a scope currently limited to click-based interactions).

In the future I’ll develop another script for draggable, droppable and sortable components, as well as one script for hoverable components. For now, I’d really like to see some people experiment with Activable and provide some feedback about the behaviors, the conventions and the API.

Mozilla addons: jQuery Injection / Script File Injection

UPDATE: @ZER0 suggested using data.url instead of data.load

In my last blog post I described different ways of interaction between content scripts and pages. One of them was to inject inline scripts in the page. Today I describe a way to inject script files such as jQuery in a page.

If the value or function you want to insert in the page is not dynamically generated (i.e. it never changes), you’d better put it in its own file. To do that, you simply need to retrieve the URL of the script file from the addon code, pass it to the content script and then inject it.

Passing The URL

The script that you want to inject in the page should be stored in the “data” folder, just like a content script. The URL is retrieved with the usual data.url utility and then passed to a content script using “port.emit” / “port.on”.

// in main.js
var data = require("self").data;

require("page-mod").PageMod({
  include: "http://example.com/",
  contentScriptWhen: "start",
  contentScriptFile: data.url( "injecter.js" ),
  onAttach: function( worker ) {
    worker.port.emit( "init", data.url( "toBeInjected.js" ) );
  }
});
// in injecter.js
...
self.port.on("init", function( scriptURL ) {
  // use the URL
  ...
});

Using an “init” event just to pass a value is rather annoying, but a “contentScriptGlobals” option should be added in a future version of the SDK to simplify this, see bug 688127.

Injecting the Script

The code of the injecter is pretty straightforward.

// injecter.js
var script = document.createElement( "script" );
script.type = "text/javascript";

self.port.on("init", function( scriptURL ) {
  script.src = scriptURL;
  window.document.body.appendChild( script );
});

If you have a cleaner way to inject a script in a page, I’d love to hear from it!

Mozilla addons: Interactions between Content Scripts and pages

Addons created with Mozilla’s Addon SDK have a secure means of communication with web pages: Content Scripts. Although the interaction between addons and content scripts is fairly well documented, the docs lack some details about the range of possible interactions between content scripts and the page itself.

The window Wrapper

Content scripts have access to a window object which is actually only a wrapper around the DOM of the page. The entire DOM API is available, so you can manipulate the page at will, but variables and functions stored in the global scope of the page aren’t available:

// inline script in example.com/index.html
window.test = "value";
// main.js, our addon
require("page-mod").PageMod({
  include: ["*"],
  contentScriptWhen: "end",
  contentScript: "console.log( 'test: ' + window.test );"
});

When loading example.com/index.html, the previous page-mod will display “test: undefined” in the error console. Likewise, it is impossible to create a variable in the content script, and access it in the page.

// main.js, our addon
require("page-mod").PageMod({
  include: ["*"],
  contentScriptWhen: "start",
  contentScript: "window.test = 'value';"
});
// inline script in example.com/index.html
console.log( "test: " + window.test );

Again, the inline script will log “test: undefined”.

The SDK docs also demonstrate how easy it is to listen to mouse events in the page and pass them to the addon:

// content script loaded by the page-mod
contentScript =
  "window.addEventListener('click', function( event ) {"+
  "  self.port.emit('click', event.target.toString());"+
  "});";

Moar Interaction Pliz

Manipulating the DOM and waiting for mouse or key events is good, but what if you want to pass data to your page or receive data from it? What if you absolutely need to create variables or functions in the page?

The wrong way of doing so is to use the undocumented unsafeWindow object instead of the wrapped and secure window one. This completely defeats the security model of the addon SDK, and since it’s an undocumented feature, it could well disappear in a future update of the SDK.

The right way of doing so is to inject inline scripts in the page and/or use customEvents:

Inline Script Injection

Inline scripts are a third level of scripts (where 10 hours are a minute) that can be used to create variables or functions in the global scope:

// Quotes nesting would be a real mess now,
// let's use a separate file loaded as a contentScriptFile
var script = document.createElement( "script" );
script.innerHTML = "var test =" + JSON.stringify( anyValue )+";";
document.body.appendChild( script );

Validation of this code will output warnings, as addons shouldn’t create script tags. But this is still the safest alternative to unsafeWindow.

customEvents

customEvents are very similar to mouse events, except that they are programmatically triggered, and that they can transport arbitrary (jsonable) data between content scripts and pages, in both directions:

// inline script in example.com/index.html
var evt = document.createEvent("CustomEvent");
evt.initCustomEvent( "transData", true, false, jsonableValue );
window.dispatchEvent(evt);
// in contentScript.js
window.addEventListener( "transData", function( event ) {
  // the data is available in the detail property of the event
  self.port.emit( "transData", event.detail );
});
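
The same pattern works in the opposite direction; a hedged sketch, where the “fromAddon” event name and the value passed are illustrative:

// in contentScript.js: send data down to the page
var evt = document.createEvent("CustomEvent");
evt.initCustomEvent( "fromAddon", true, false, jsonableValue );
window.dispatchEvent(evt);
// inline script in example.com/index.html
window.addEventListener( "fromAddon", function( event ) {
  console.log( event.detail );
});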

I had to use both of these techniques in my geolocation debugging addon (geo-devtool) to interact with the Google Maps API, and I’m pretty sure there are many more use cases.

PS: I’m willing to propose improvements to the SDK docs on this topic once the content of this blog post is validated by someone in the addon team.

CORS: an insufficient solution for same-origin restrictions

UPDATE 2: The main criticism I have received is that people will click “allow” without understanding the security implications of this action. I don’t know yet how to mitigate this risk, but the status quo is not an option, as the problem already exists: one can already develop cross-browser addons able to perform cross-domain requests after a single-click installation.

UPDATE 1: I’ve added technical details of how “user-enabled CORS” would work.

The same origin policy is the most fundamental principle of security built in web browsers: a web page cannot access the content of websites hosted on other (sub)domains. Developers came up with JSONP, and XHR2 introduced CORS to work around the strictness of this policy. Both mechanisms need to be implemented by API/content providers (Twitter, Facebook, Google Maps, Flickr, …) and leave no power to third party web devs and users. This severely impedes the hackability of the Web.

I want to see the Web succeed at providing a cross-platform (mobile) app environment, and withstand competition from other, proprietary environments. I’m thrilled by the recent progress of WebAPIs, and yet I often find myself stuck when trying to create basic webapps.

Same Origin Restrictions

While experimenting with foreignr, an offline-enabled client for Google Maps, I found myself limited by the impossibility to dynamically cache images across different origins. I wrote about it, and ultimately had to write a client/server library to abuse appCache through iframes (see mediaCache).
Recently I started experimenting with a Twitter client and soon hit the limitations of their JSONP API: every request (save public reads) has to be sent with an oAuth signature as a custom header. Since JSONP is transported using a <script> tag instead of an XHR object, custom headers aren’t available.

CORS to the rescue?

Both problems would be solved if Twitter and Google Maps added a simple header to their API: Access-Control-Allow-Origin: *. Unfortunately they don’t, and show no sign of willingness to do so. In fact, despite efforts to promote widespread adoption of CORS, only a very limited number of (small) sites implement it. Many websites provide JSONP APIs, but CORS has many more use cases:

  • uploading pictures to an image hosting website (already allowed by imgur.com),
  • storing images or any media locally for offline use,
  • enabling cross-origin oAuth requests,
  • using images or videos as WebGL textures,
  • processing image data inside a canvas element…

Doing any of that without CORS requires using a server acting as a same-origin proxy to the targeted APIs/resources. What a costly waste. Meanwhile, every single desktop and mobile SDK allows developers to do such things, even those based on Web technologies, such as BlackBerry WebWorks, the webOS SDK or PhoneGap.

CORS and beyond

Prompt UI of user-enabled CORS
Just like we allow or will allow webapps to do such sensitive operations as geolocating the user, or accessing one’s phonebook after obtaining permission, it should be possible to use public features and resources of one particular website on a different origin (i.e. what a user can access without being logged in).
The security implications are important, but one could imagine an opt-out Access-Control-Prevent-Origin: * header, blacklists and other ways to mitigate the risks.

This mechanism would be no different than current XHRs:

// script on http://example.com
var xhr = new XMLHttpRequest();
xhr.open("GET",
  "http://api.twitter.com/1/statuses/home_timeline.json", true);
xhr.setRequestHeader("Authorization", oAuth);
xhr.onreadystatechange = function ( e ) {
  ...
};
xhr.send();

Executing the previous code would prompt the user for permission, and a request would be sent only if permission is given; otherwise it would fail with a usual cross-origin error. Cookies associated with the target address would not be sent with this request; it would still have to be authorized through oAuth, even if the user is logged in to Twitter.

Moving forward

I can keep asking API providers to add CORS headers, and I will, hoping that one day it will become as widespread as JSONP. But there will always be new APIs that don’t add it immediately, and there will always be companies that don’t listen to their users. And until Web developers have a way to work around the same-origin policy, the Web will not be a suitable platform for many apps.

I’d like to hear about other alternatives to CORS and ways to mitigate the risks. If you’ve got an idea, speak up, and let’s move the web forward.

Appcache & Beyond

Here’s the presentation I gave for the first edition of LyonJS. Firefox or Chrome is required to view the slides; sorry to everyone else, I’ll try to fix that next time.

Appcache use-cases

I’d like to reorient my conclusion slightly, after the discussion we had at the end of my talk.
As has been pointed out, offline webapps are not the only use case for appcache. It can also be used for all webpages that get their content from third-party webservices, through Ajax APIs. This is the case for ParisJS.org, LyonJS.org and a growing number of websites that are pure mashups of other webservices.

You forgot to say please

In this case, the benefit is better performance. But my point is that, if appcache needs some improvements (and I think it does!), the goal shouldn’t be to make it the webperf hammer of every website. There is one unique reason for that: the browser should be the one in charge of deciding what deserves to be cached, and what should be evicted from it. We can’t let every website store 200KB-1MB of resources on a device without asking for permission and providing a simple UI to free memory.

“Evictable” appcache

I actually believe that “asking for permission” is not the best option we can come up with, as users shouldn’t have to manage websites like installed apps (if they’re not meant to work offline); this would feel backward. I’d like to see an “evictable” flag added to the spec (just like the one proposed for indexedDB) that would let the browser know that it’s safe to remove a cache group when needed. With such a flag, users’ consent wouldn’t be required to let websites use appcache. For offline webapps, users would have to grant permission and they would be the ones in charge of “uninstalling” the webapp.

Would that be an acceptable behavior for you?

Offline Web Applications, we're not there yet.

Being able to use applications offline is an important feature in the quest for native-like (mobile) web apps. The offline mechanisms currently available are indexedDB and appcache. Developers who’ve already used the latter know that it’s at least clumsy, and sometimes useless. If you haven’t, here’s why.

Use case, use case, use case

I want to create a mobile client for Twitter, no wait! I want to create a mobile client for Google Maps, no wait! I want to create a mobile news reader, no wait! …
Anyway, I’m a strong believer in the Open Web and a challenge taker, so I don’t even want to use PhoneGap.
To display the home page and the different views of my app, I need a way to store the HTML, CSS and JS. To store the tweets, maps or news items of my app in a manageable way, I need a database. This content should be updated as soon as a connection is available; same goes for the HTML, CSS and JS that run the app. More importantly: my tweets, maps or news items aren’t only text, I need to store the associated images or profile pictures.

What appcache offers

The main way to use the appcache is the cache manifest: a text file associated with an HTML page using the manifest attribute.

<!DOCTYPE html>
<html manifest="index.appcache">
<head>
...

The manifest itself consists mainly of a list of resources that should be cached and made available for offline use.

CACHE MANIFEST

CACHE:
# resources listed in this section are cached
styles.css
script.js
jquery.js
logo.png

NETWORK:
# resources matching those prefixes will be available when online
http://search.twitter.com/*
http://api.twitter.com/*

Here’s how a browser deals with appcache (simplified):

  • on the first visit, the page is loaded from the network and every resource listed in the CACHE section is downloaded and stored,
  • on subsequent visits, the page and its resources are served from the cache, even when online, and the manifest is then checked in the background,
  • if the manifest has changed, all the resources are downloaded again and an updateready event is fired.

For more info, have a look at appcachefacts or the spec.

If the appcache has been updated, the browser does not refresh the page to display the new content. It is however possible to listen to updateready events and propose that users reload the page:

if (window.applicationCache) {
    applicationCache.addEventListener('updateready', function() {
        if (confirm('An update is available. Reload now?')) {
            window.location.reload();
        }
    });
}

Is that an acceptable user experience?
Having to do that every time the content of your app is updated is not acceptable (that’s why appcache should not be considered a way to improve the performance of normal websites). It would be acceptable if all the content of the app was updated using XHR, and only the structural HTML, CSS and JS was listed in the cache and updated occasionally (this would actually be a simpler update mechanism than the ones used by native apps). But current websites are usually not developed this way, as this can cause some accessibility and crawling problems.

Storing and deleting content

To store the content of my app in a manageable way, some kind of database is required. Luckily enough, IndexedDB will soon be available in the five major browsers. It will allow tweets, maps or news items to be first stored in plain text, as JSON or HTML; and later listed, sorted, updated and deleted.

What about the associated images?
That’s the tricky part when building a third-party app for a popular API such as Twitter or Google Maps: images loaded from a foreign origin cannot be turned into data URLs for security reasons (unless they’re served with CORS headers, but they’re not; and using a server as a proxy to set CORS headers doesn’t scale, as those APIs have IP-based usage limits).

Those images can still be listed in the cache manifest: a manifest server could be built to keep track of what should be cached for each user; it would dynamically generate a new manifest every time an image has to be added or removed; and once the new manifest is available, the client would do an applicationCache.update().
This is however absolutely unrealistic: first, a server is required, and that’s very much unfortunate. Then, when updating the cache, the browser throws away all of its current content and loads all resources again… This means that every time a single image has to be added or removed from the cache, all other resources are downloaded again!

Can I build offline web apps, now?

  • App logic available offline: appcache
  • Text content available offline: indexedDB
  • App up-to-date when online: appcache + prompt user to reload
  • Content up-to-date when online: indexedDB + update using XHR
  • Foreign images available offline: not yet

How should appcache be improved?

Content update

Yehuda Katz’s suggestion is to let developers specify that the manifest should be checked for updates before the page is loaded from the cache. If it has been updated, the page should be downloaded from the network.
This sounds like a good idea when XHR updates are not an option, and would probably yield good performance. I’m just afraid this could be abused by many websites just to improve web perfs, and users would end up with 1MB of HTML, CSS, JS and images on their disk for every site they ever visited in their life :-/
Ian Hickson suggested renaming the appcache to “offline application store”. Although this is unlikely to happen, I would also advocate using it only for offline purposes, and leaving caching strategies to the browser.
A compromise could be to adopt Yehuda’s solution, but always prompt users for permission to use the appcache. This should effectively prevent cache bloat. It is not the perfect solution; some time ago I actually opened a bug in Firefox to remove the user prompt: application cache should not bother users.

Dynamic cache

The best suggestion I’ve seen to address the need for a dynamic cache is the defunct DataCache proposal. The idea is to add an independent and dynamic cache to the static appcache, to let developers add and remove resources from the cache at will.
I confess I had trouble understanding all the details of the spec, but here’s a naive and rough API I’m proposing:

applicationCache.dynamicStore.add( uri );
applicationCache.dynamicStore.remove( uri );
applicationCache.dynamicStore.update( uri );

var cacheTransaction = new applicationCache.dynamicStore.transaction();
cacheTransaction.add( uri );
cacheTransaction.add( anotherUri );
cacheTransaction.run();

Of course, it should be possible to listen to events such as “updated” on applicationCache.dynamicStore.

This spec also introduces the interesting concept of offlineHandlers that allows XHRs to be rerouted to client-side functions when offline.

navigator.registerOfflineHandler( "search.twitter.com/*", function( request, response ) {
  // search for results in indexedDB instead
  ...
});

I definitely advise you to take a look at the code examples.

Conclusion

We’re not there yet, and unfortunately I’ve got no ideal solution to the content update problem that provides both a good developer and user experience. “What Yehuda says + permission prompt” is the least bad option I can think of.
To make the appcache dynamic, I would strongly suggest giving a second chance to DataCache, and maybe simplify the API a bit.

The W3C will soon gather to think about the future of offline web apps, so if you’ve got any valuable input, speak up!

Firefox Aurora: With Great Power Comes Great Freedom

This is a more detailed version of the article posted today on hacks, aimed at a more knowledgeable audience.

Aurora Logo
New features can only become stable and usable if you test them and push them to their limits. And you can only have fun playing with new features if it’s in a hassle- and risk-free way. You deserve better than a beta, you deserve Aurora.

What is Aurora?

Up to Firefox 4, Web authors only had limited options: either live in the quiet comfort of stable releases and wait months for new features to appear, or play Russian roulette with the nightly builds, exciting but painful. Beta releases aren’t filling the gap if we want to deliver features more often and increase their quality at the same time: we need more people to give us feedback and report bugs earlier. This implies providing a safer environment to test features and a simpler way to switch to a more stable version of the browser (we call them channels).

Aurora is this safer environment.

6 Week Release Cycles!

New features added to Firefox are delivered on a daily basis through the nightly builds. Every six weeks, a new version of Aurora is delivered: it is roughly a nightly build, stripped of the features that are problematic or make Firefox unstable. During six weeks, Aurora will not be updated with new features, but will be updated with bug fixes on a daily basis. After six weeks, the features that are stable enough will land in the Beta channel, which receives weekly bug fixes. Another six weeks later, the features that are ready move from the Beta channel to the stable one: a new Firefox is born. The strictness of the quality control increases as features progress to later channels.

If you’re into fancy charts and technical details, have a look at the specifics of the process. If you’re more into metaphors, think of the journey of a feature as the youth of a bird: engineers are our (free-range) egg-laying chickens who want to see their features leave the nest, and you can help by watching over and taking care of the features during the gestation, brooding and rearing, and by being as proud as we are to see them finally fly on their own wings.

Switching between channels

Once you have downloaded and installed Aurora, you can switch to other channels at will. Simply click on the Help menu and open the About window.

Aurora About window

Firefox Aurora currently includes, amongst other nice improvements, CSS3 Animations, window.matchMedia and better tab-closing behavior. Give it a try and tell us what you think.

Default display value of any DOM element

Library authors have long been struggling to find a bulletproof function that gives the default display value of any DOM element. The average Web developer shouldn’t have to worry about it, but the solution had to be logged somewhere on the Internet.

Consider the following kind of code:
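A minimal reconstruction of that kind of code, where the style sheet rule and the fadeIn call are assumptions:

/* in the style sheet: these HTML5 elements are hidden by default */
address, figcaption, mark, ruby { display: none; }

// later, in a script: reveal the address with an animation
$( "address" ).fadeIn();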

The library used for animation will have to set the display value of the address to something other than none before animating its opacity from 0 to 1.

What should this display value be?

One way would be to use a hash object to store the default display value of elements:

var defaultDisplay = {
  div: "block",
  span: "inline",
  p: "block",
  b: "inline",
  td: "table-cell",
  ...
}

But this object would be rather large, considering the list of HTML5 elements. It is also guaranteed that, some day, other elements will be added to this list, and the library would be incompatible with them. The solution found in most libraries is similar to this one:
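A reconstruction of the idea (not the exact code of any particular library): create a dummy element of the same type, insert it into the document, read its computed display value, and cache the result.

var defaultDisplayCache = {};

function defaultDisplay( nodeName ) {
  if ( !defaultDisplayCache[ nodeName ] ) {
    // create a dummy element, measure it, then remove it
    var elem = document.createElement( nodeName );
    document.body.appendChild( elem );
    defaultDisplayCache[ nodeName ] = window.getComputedStyle( elem ).display;
    document.body.removeChild( elem );
  }
  return defaultDisplayCache[ nodeName ];
}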

However, this function fails with the code found in the first snippet, as it will return “none” for any address, figcaption, mark or ruby inserted in this document, because that’s what the CSS says!

iframe to the rescue

In jQuery, we resorted to using an iframe to sandbox the dummy elements we create: the iframe hosts a completely different document, and the elements you append to it are not affected by the original CSS.
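A sketch of that approach, reusing the same cache as above (this is not jQuery’s exact code):

function defaultDisplay( nodeName ) {
  if ( !defaultDisplayCache[ nodeName ] ) {
    // the dummy element lives in a separate document,
    // out of reach of the page's style sheets
    var iframe = document.createElement( "iframe" );
    iframe.style.display = "none";
    document.body.appendChild( iframe );

    var doc = iframe.contentDocument || iframe.contentWindow.document;
    doc.write( "<!DOCTYPE html><html><body></body></html>" );
    doc.close();

    var elem = doc.createElement( nodeName );
    doc.body.appendChild( elem );
    defaultDisplayCache[ nodeName ] = doc.defaultView.getComputedStyle( elem ).display;

    document.body.removeChild( iframe );
  }
  return defaultDisplayCache[ nodeName ];
}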

It should be noted that creating an iframe is a very expensive operation: on my three-year-old laptop running Firefox 4 on Ubuntu, it takes up to 17ms to check the default display value of an element. This is mitigated by trying the classical code first (which executes in a fraction of that time) and caching the output of the function.