
Working around CSS Transitions bugs with getComputedStyle

There are numerous problems and browser inconsistencies in the current implementations of CSS Transitions. The following problematic situations can be solved by extensive use of getComputedStyle():

Animating after display

Let’s say you have an element styled with display: none; left: 0; transition: left .5s; and you want to reveal it and animate its position.

Simply changing the style of the element to display: block; left: 100px; (function display_animate below) won’t work because the browser didn’t take the initial position of the element into account before changing it. You can force the browser to behave as expected by accessing the computed style of the element right after displaying it (function display_animate_fix below).
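As a sketch of both functions (the element handle is passed in, and the window object is injectable only to keep the sketch self-contained; names are illustrative):

```javascript
// Won't animate: both style changes land in the same style recalculation,
// so the browser never commits the initial left: 0
function display_animate( elem ) {
  elem.style.display = "block";
  elem.style.left = "100px";
}

// Reading a computed value between the two writes forces the browser to
// commit the initial position, so the transition runs as expected
function display_animate_fix( elem, win ) {
  win = win || window;
  elem.style.display = "block";
  win.getComputedStyle( elem ).left; // flush pending style changes
  elem.style.left = "100px";
}
```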

Muting transitions

What if an element is styled with display: none; left: 0; transition: left .5s; and you want to jump to a different position, then animate it back to its origin?

The trick to temporarily disabling the transitions is to hide your element with display: none;, modify its style and then display it again (function jump_animate below). But as you might have guessed, this won't work unless you give the browser an opportunity to realize what's happening… by accessing the computed style of the element, once after hiding it and once after revealing it (function jump_animate_fix below).
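A sketch of the fixed version (the broken jump_animate is the same minus the two getComputedStyle calls; the window object is injectable here only to keep the sketch self-contained):

```javascript
// Jump to 100px without a transition, then animate back to the origin
function jump_animate_fix( elem, win ) {
  win = win || window;
  elem.style.display = "none";
  win.getComputedStyle( elem ).left;  // commit the hide: transitions are muted
  elem.style.left = "100px";          // jump, no animation
  elem.style.display = "block";
  win.getComputedStyle( elem ).left;  // commit the new position
  elem.style.left = "0";              // this change animates back to 0
}
```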

Note that it is also possible to temporarily set the element to transition: none;, in combination with two getComputedStyle calls, but this requires more code because of vendor prefixes.

Animation from/to auto

This is one of the most requested features of CSS Transitions, but it has never been specced clearly. It doesn't work at all in Firefox (see the associated bug) or Opera; the WebKit team tried to fix it but I think they made it worse; and in IE10… I don't know.

Instead of detailing all the methods that don't work, I'll present the ones that do:
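The demos are missing from this archive, but the getComputedStyle-based approach can be sketched: measure the auto value first, then transition between fixed pixel values (assuming a height transition is declared in the CSS; the window object is injectable only to keep the sketch self-contained):

```javascript
// Animate an element from 0 to its natural (auto) height
function animate_to_auto_height( elem, win ) {
  win = win || window;
  elem.style.height = "auto";
  // getComputedStyle resolves "auto" to the actual pixel value
  var target = win.getComputedStyle( elem ).height;
  elem.style.height = "0";
  win.getComputedStyle( elem ).height; // commit the start value
  elem.style.height = target;          // animates to the measured height
}
```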

And you thought CSS Transitions would allow you to animate your pages without JS?

Activable: a modern approach to UI components

Activable is a new step between current libraries of UI components and the upcoming Web Components: a lightweight, IE8-compatible script that lets you build common UI components in an entirely declarative way. In this post, I explain the rationale behind this project.

The newcomer

A few weeks ago I dived into Twitter Bootstrap, trying to understand what the fuss was all about. This project combines a CSS framework (a boilerplate CSS + a grid system + some layouts) with a library of UI components powered by jQuery and custom plugins. TB is modern in many ways:

  • it includes CSS mechanisms to build responsive designs,
  • the UI components are progressively enhanced with CSS3,
  • it ditched support for IE6 and has good support for recent smartphones and tablets.

Despite its reduced file size and fresher theme, the library of UI components is nothing revolutionary compared to the five-year-old and already venerable jQuery UI:

  • it depends on jQuery,
  • each component needs to be initialized: $( elem ).component();

Realizing that made me cringe a little, and I had to share my disappointment…

The declarative era

How long until we can write HTML/CSS only and have functional UI components? Let’s consider the current situation:

  • it’s 2012 and we’re still waiting for Mozilla to implement <input type="range" />,
  • Dijit (Dojo’s UI library) claims to offer declarative components, but they’ll only work if present in the document on DOMContentLoaded,
  • Web components will, in the future, give Web developers the ability to develop clean & reusable UI components (if you haven’t read the intro, I advise you to do so), but a third of our users still browse the web using IE8 or worse.
Let’s face it: I’ll have to initialize my UI using JS for the next ten years.
…All I asked for was the ability to automatically apply a behavior to some special <div>s…
― me, one month ago

Event delegation to the rescue

All hope is not lost, for event delegation makes it possible to register a single click listener on a document, and filter the events bubbling up the tree according to their target. Using jQuery for instance, one can register a global delegated event handler in the following way:
$( document ).on( "click", "li.item", function handler() { … } );
In this case, the handler would be executed for every click inside <li> elements of class item.

Creating a declarative tab component is then just a matter of associating a behavior with some special markup:

A tab component consists of a list of labels and content panes. When a label is selected, the associated content is displayed. The content of the other tabs in the same component are hidden.
What needs to be identified in the markup is: the instance of the tab component (.tab), the clickable labels (<a>), their associated content (<div>), as well as “which tab is currently displayed” (.active). The relationship between labels and their content can be expressed using internal links.
Finally, write the JS that adds and removes the .active class in a delegated click handler, and the CSS that highlights active labels and shows active content panes.
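The steps above can be sketched as a delegated handler. To keep the sketch self-contained, the DOM lookups are injected as plain functions; in real code they would be implemented with closest()/querySelector(), and paneFor() would follow the label's internal link. This illustrates the delegation logic only, it is not Activable's actual code:

```javascript
// Returns a delegated click handler for the tab markup described above
function makeTabHandler( dom ) {
  return function( event ) {
    var label = dom.closestLabel( event.target );
    if ( !label ) { return; }          // click outside any label
    var tab = dom.closestTab( label );
    // deactivate the currently active label and pane of this component
    dom.activeOf( tab ).forEach( function( el ) {
      dom.removeClass( el, "active" );
    });
    // activate the clicked label and its associated content pane
    dom.addClass( label, "active" );
    dom.addClass( dom.paneFor( label ), "active" );
  };
}
```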

Introducing Activable

Activable is the JS glue described above.

Note that the suggested .tab class has been replaced by a generic data-activable attribute. The JS tab of this fiddle has been included to show how many lines of JS were written to create this specific demo (if it’s blank, it’s not a bug).

If this component looks familiar to Twitter Bootstrap users, it’s because TB’s CSS framework is used in this example. In fact, it is possible to recreate many of TB’s UI components using Activable only…


The behavior “when one tab is clicked it becomes visible and the others are hidden” happens to be very similar to other component behaviors:

  • “When one accordion pane is clicked, it expands and the others collapse”,
  • “When one radio button is checked the other buttons are unchecked”,
  • “When one branch of a tree-list is clicked, its children become visible and the other children are hidden”,
  • “When one image in a carousel becomes visible, the other images are hidden”,

This could be generalized to “When an element in a group becomes active, the others become inactive”. One simple variation of this behavior would be to allow the state of an element to be toggled, and another would be for state changes not to affect other elements of the same group.
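Stripped of any DOM concern, the three behaviors boil down to a single state transition over a group. This is only a sketch, and the mode names are mine, not Activable's:

```javascript
// states: array of booleans (one per group member); index: the member
// being activated; mode: undefined (default), "toggle" or "independent"
function activateInGroup( states, index, mode ) {
  return states.map( function( active, i ) {
    if ( i !== index ) {
      // "independent": other members keep their state (checkbox-like);
      // otherwise they are deactivated (tab/accordion/radio-like)
      return mode === "independent" ? active : false;
    }
    // "toggle" flips the clicked member; the default always activates it
    return mode === "toggle" ? !active : true;
  });
}
```

For example, a tab group in state [true, false, false] whose third member is clicked moves to [false, false, true].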

Activable implements these three behaviors and can be used to implement tab/accordion/radio buttons/checkbox buttons/tree-list and even carousel components, as illustrated in this demo.

Using and extending

Although there is no dependency on any CSS framework, there are some conventions to respect in your markup to make your UI components “activable”. Those conventions are detailed in the documentation, along with the JS API that can be used to extend the components:

  • event listeners: .on( "activate", callback ) / .on( "deactivate", callback ) / .off( type, [callback])
  • direct modifiers: .activate() / .deactivate() / .toggle()
  • iterative modifiers: .next() / .prev()

What’s next

Activable is not meant to replace Twitter Bootstrap or jQuery; it’s only here to help you develop UI components quicker and keep your code clean. If I were a marketing guy, I’d say that Activable is the Backbone of UI component libraries (with a scope currently limited to click-based interactions).

In the future I’ll develop another script for draggable, droppable and sortable components, as well as one script for hoverable components. For now, I’d really like to see some people experiment with Activable and provide some feedback about the behaviors, the conventions and the API.

CORS: an insufficient solution for same-origin restrictions

UPDATE 2: The main criticism I have received is that people will click “allow” without understanding the security implications of this action. I don’t know yet how to mitigate this risk, but the status quo is not an option, as the problem already exists: one can already develop cross-browser addons able to perform cross-domain requests after a single-click installation.

UPDATE 1: I’ve added technical details of how “user-enabled CORS” would work.

The same origin policy is the most fundamental principle of security built in web browsers: a web page cannot access the content of websites hosted on other (sub)domains. Developers came up with JSONP, and XHR2 introduced CORS to work around the strictness of this policy. Both mechanisms need to be implemented by API/content providers (Twitter, Facebook, Google Maps, Flickr, …) and leave no power to third party web devs and users. This severely impedes the hackability of the Web.

I want to see the Web succeed at providing a cross-platform (mobile) apps environment and withstand the competition from proprietary environments. I’m thrilled by the recent progress of WebAPIs, and yet I often find myself stuck when trying to create basic webapps.

Same Origin Restrictions

While experimenting with foreignr, an offline-enabled client for Google Maps, I found myself limited by the impossibility of dynamically caching images across different origins. I wrote about it, and ultimately had to write a client/server library to abuse appCache through iframes (see mediaCache).
Recently I started experimenting with a Twitter client and soon hit the limitations of their JSONP API: every request (save public reads) has to be sent with an oAuth signature as a custom header. Since JSONP is transported using a <script> tag instead of an XHR object, custom headers aren’t available.

CORS to the rescue?

Both problems would be solved if Twitter and Google Maps added a simple header to their API: Access-Control-Allow-Origin: *. Unfortunately they don’t, and show no sign of willingness to do so. In fact, despite efforts to promote widespread adoption of CORS, only a very limited number of (small) sites implement it. Many websites provide JSONP APIs, but CORS has many more use cases:

  • uploading pictures to an image hosting website (already allowed by …),
  • storing images or any media locally for offline use,
  • enabling cross-origin oAuth requests,
  • using images or videos as WebGL textures,
  • processing image data inside a canvas element…

Doing any of that without CORS requires using a server acting as a same-origin proxy to the targeted APIs/resources. What a costly waste. Meanwhile, every single desktop and mobile SDK allows developers to do such things, even those based on Web technologies such as Blackberry WebWorks, WebOS SDK or PhoneGap.

CORS and beyond

[Prompt UI of user-enabled CORS]
Just like we allow or will allow webapps to perform such sensitive operations as geolocating the user, or accessing one’s phonebook after obtaining permission, it should be possible to use the public features and resources of one particular website from a different origin (i.e. what a user can access without being logged in).
The security implications are important, but one could imagine an opt-out Access-Control-Prevent-Origin: * header, blacklists and other ways to mitigate the risks.

This mechanism would be no different than current XHRs:

// script on … (origin elided in the original)
var xhr = new XMLHttpRequest();
xhr.open( "GET", "…", true ); // target URL elided in the original
xhr.setRequestHeader( "Authorization", oAuth );
xhr.onreadystatechange = function( e ) { … };
Executing the previous code would prompt the user for permission, and the request would be sent only if permission is given; otherwise it would fail with the usual cross-origin error. Cookies associated with the target address would not be sent with this request; it would still have to be authorized through oAuth, even if the user is logged in to Twitter.

Moving forward

I can keep on asking API providers to add CORS headers, and I will, hoping that one day it will become as widespread as JSONP. But there will always be new APIs that don’t add it immediately, and there will always be companies that don’t listen to their users. And until Web developers have a way to work around the same origin policy, the Web will not be a suitable platform for many apps.

I’d like to hear about other alternatives to CORS and ways to mitigate the risks. If you’ve got an idea, speak up, and let’s move the web forward.

designing ToDoSo

The navigation in ToDoSo will occur on three different levels:

  • The folder level allows the user to view all the presentations that he/she has created in the application. Although there are no folders in ToDoSo, this term is used in analogy with the file system explorer that is to be found on the desktop.
  • The presentation level allows the user to navigate within a presentation, through the slides that it is composed of.
  • The slide level is the one at which the player and the editor of the presentation operate: the slide is displayed in full-screen.

Those levels are detailed in a different order, to clarify the rationale behind their design.

Presentation level

In current presentation software, navigating a set of slides generally requires scrolling through the thumbnail view. In desktop applications, additional ways of navigating might be provided, but they are neither the default choice nor obvious alternatives.

This way of navigating is obviously convenient for moving to the previous and next slides; however, this could also be achieved by providing two buttons labelled “previous” and “next”. What is the real value of the thumbnails of those two slides when they could be accessed in full screen by a simple click? The visual consistency of the presentation should be guaranteed by the “theme” and “layout” features of the application; it does not require the user to remember (or see) what the other slides look like. Likewise, the thumbnail is generally too small for its content to be readable: a slide has to be in the main view for its content to be legible.

The usefulness of the thumbnail view resides in the possibility for the user to quickly search for a particular slide, which is likely to be easily identified visually. It requires that the user point at the thumbnail view with the mouse, scroll until he/she finds the slide, and finally click on it. In a long presentation, this can be a rather long process, with the slides stacked on top of each other in a single dimension.

The solution that will be explored in ToDoSo is to remove the thumbnail view from the interface and to create a zoomable bi-dimensional map of the slides.

When a presentation is opened, this map is displayed to the user, who can simply double-click on a slide or point at it and use the mouse’s wheel to zoom in. Once a slide is zoomed in, controls appear to let the user access the next and previous slides directly, and zoom out to search for a particular slide in the map.

This way of navigating can be compared to the one used in Google Maps, which is generally referred to as a Zooming interface (Cockburn et al., 2008) or Zooming Interface Paradigm (ZIP) (Raskin, 2000).

Folder level

It would be tempting to apply the Zooming Interface concept to the folder level to provide a coherent way of navigating throughout the application. However, this would require displaying all the slides of all presentations at this level, which in turn requires very tiny thumbnails to fit multiple presentations in the same view. Instead, it has been decided to present only the first slide of each presentation at this level.

A visual clue should be provided to differentiate this level from the presentation level, in addition to the different menus and toolbars, such as a different background colour.

To switch to the presentation level, the user simply double-clicks on a presentation to reveal its full content while the other presentations are hidden.

Slide level – the player

The player is designed to let the user access the next and previous slides, play the possible animations of the slide, and switch back to the presentation level.

In this mockup of the player, the buttons have been given the look of keys. This emphasises the possibility of using the keyboard as an alternative way to control the display of the presentation. Indeed, when the presentation is played during a lecture, the buttons should be hidden, and a keyboard or a remote control could be used instead of the mouse. These buttons actually fade out when the mouse pointer is moved away from the slides.

The button with the bigger arrow corresponds to the “play” button: the main control that triggers either an animation or the display of the following slide once all the animations have been played.

Slide level – the editor

This part requires further research before a valuable design can be produced.

Practical Performances in Web applications #1: Making it faster

Important notice: the following post is likely to be largely modified in the next few days, due to the importance of this topic in the scope of my Master project.

An obvious criterion of usability for an application in general is its performance. This criterion encompasses the initial load time of the application and the potential transition times between the different states of the application (e.g. the different screens of a desktop application or the different pages of a Website). There are two complementary ways to tackle it:

  • identifying the technical parameters involved in the length of those load times and taking actions to make the application faster,
  • identifying the psychological factors of perceived performance by the user and taking actions to make the application feel faster.

Making the application faster

Over the past decade, performance on the Web is an area that has gained increased attention as Websites moved from static text-based content to dynamic and multimedia Web applications. The literature is an important source of information about back-end performance, covering the different servers (e.g. Hu et al 2007), architectures (e.g. Barroso et al 2003) and patterns (e.g. Pyarali et al 1999).

Lately, major actors of the Internet, including Yahoo and Google, have started to advocate front-end optimisation as an easy and effective way to achieve performance on the Web. Yahoo published guidelines for “High-performance web sites” (Souders 2008 and its online, more up-to-date counterpart: Best Practices for Speeding Up Your Web Site) along with a browser plugin which measures the performance of a Web page against those guidelines: YSlow.

The approach adopted in ToDoSo was to delegate the responsibility of back-end optimisation by using a cloud computing service and an existing Web application framework, while concentrating on front-end optimisations using Yahoo’s resources.

Back-end performances – the infrastructure

One of the fastest-growing trends in Web infrastructure during the past two years seems to be cloud computing.

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them.
Wikipedia, the Free Encyclopedia

In the case of Web applications such as ToDoSo, it allows project managers to rely on a third party’s expertise in building, refining and maintaining an efficient infrastructure which will run the application in optimal conditions.

In addition to the enterprise-oriented cloud computing offers (see DELL, Sun Microsystems, Microsoft, IBM, Joyent and Vmware services), there are now more affordable solutions aimed at developing projects such as Amazon EC2, Google App Engine, Engine Yard and Aptana Cloud.

The latter has been chosen in ToDoSo…

  • because it is based on Joyent services, which is a guarantee that the infrastructure will be able to scale with the application until it becomes the next Facebook,
  • because of its low starting price ($0.027/hour, second only to Google’s free plan),
  • because of its large variety of preconfigured environments (PHP, Java, Jaxer, Rails, Python) which makes it possible to switch easily if the first choice proves to be a bad one,
  • and because of its tight integration within the Integrated Development Environment used throughout this project ‒ Aptana Studio ‒, effectively reducing the complexity of the deployment.

Back-end performances – the development Framework

Performance was not a key factor when it came to choosing the development framework for ToDoSo. There is a vast choice of existing frameworks for Web applications; producing an objective comparison of their respective performance would therefore have required a tremendous amount of work that this project could obviously not afford. The choice was limited to the Open Source frameworks available on the chosen infrastructure, and was actually based on the gentleness of their learning curve. Additional details are provided in the section related to “Developer Friendly technologies”.

Being Open Source software, it would be possible to fine-tune the application at different levels later in the development process, if required. It should also be noted that the least expensive cloud computing service for the Microsoft .Net platform appeared to be Amazon EC2, starting at $0.125/hour, i.e. 460% of the price of the first plan in Aptana Cloud.

Front-end performances – Yahoo guidelines

Yahoo’s “Best Practices for Speeding Up Your Web Site” is a set of 34 best practices which can significantly improve the speed and responsiveness of a Web site. The following part describes which of them have been applied to ToDoSo, and how.

These rules can be summarized as “reduce the number of external resources to download”.

For that purpose, RockstarApps provides the JsLex plugin for Aptana Studio. Once installed, <link> or <script> tags can simply be selected in the source of a Web page; a dedicated option in the right-click menu can then be used to concatenate, minify (using YUI Compressor) and gzip the source files all at once, while the selected tags are replaced by a single one pointing at the new optimised resource.

These rules can be summarized as “improve the download speed of external resources”.

A [CDN] is a system of computers networked together across the Internet that cooperate transparently to distribute content for the purposes of improving performance and scalability.
‒ Wikipedia, the Free Encyclopedia

Just as for cloud computing, there are enterprise-sized CDNs, the most famous being Akamai, as well as solutions adapted to smaller projects. Amazon offers two services, Simple Storage Service (Amazon S3) and CloudFront, which, once combined, constitute a CDN with no minimum fees and competitive pricing: according to Bob Buffone, the monthly cost for a Website receiving more than 200 visitors a day is around $0.15[1. Just Jump: Start Using Clouds and CDNs].

There is another Eclipse plugin by RockstarApps, for Amazon Web Services, which allows Web developers to upload static resources to Amazon S3. The gzip compression must be applied before uploading the resources to S3 since, being a passive server, it does not archive files on the fly. Appropriate headers must therefore be set on the files (“content-type: text/css” and “content-encoding: gzip” for a CSS file, “content-type: text/javascript” and “content-encoding: gzip” for a JavaScript file), which is done automatically for files with the extension “.css.gz” or “.js.gz” by the plugin. In Safari and Google Chrome, however, the extension takes precedence over the headers; the .gz suffix therefore has to be removed, which is not possible in the current version of the plugin.

Yahoo developed its own image optimisation tool in accordance with this guideline, which can be used directly from within the YSlow plugin. As of July 2009, Gracepointafterfive developed their own tool, punypng, claiming their state-of-the-art solution to be second to none. Although no benchmarks have been produced by independent testers, the early results exposed by the creator of the tool seemed encouraging enough to justify its use for ToDoSo.

During the development of the different prototypes of this project, JavaScript already proved to be of major importance in the operation of the Graphical User Interface. The library of choice for this project was jQuery, a lightweight piece of JavaScript (about 19KB once minified and gzipped) with a clean and easy-to-learn API making DOM traversal and manipulation, event handling, effects and ajax implementation a breeze. Significant documentation and optimisation work has been undertaken to use the library in compliance with those guidelines.

  • Finally, some instructions have been ignored, either because they were not relevant to the chosen development framework (e.g. Flush the Buffer Early) or because the problem they deal with did not appear in ToDoSo (e.g. instructions related to the use of cookies or iframes)


By getting familiar with simple rules that were then applied during the development of the application, choosing affordable cloud computing and CDN solutions, and using a few simple tools aimed at front-end performance improvements, the load time of the project’s home page is now under 2 seconds, despite the dozen images displayed on it.

It should however be noted that some of those strategies have not yet been applied to the two online prototypes, since they are early previews that will be vastly modified before being usable in real conditions.

Improved readability

Since I’m producing rather long posts these days (and there are more to come), I figured that some adjustments to the layout of the pages would be more than welcome.

Nothing has changed on the home page, but when clicking on the title of a post (or following the links in a feed reader), the new width, line-height and font-size should prevent the “there’s-no-way-I’m-reading-that!” effect.

Event delegation with jQuery1.3

The ability to bind event listeners to DOM elements using JavaScript is the foundation of elaborate interactions in Web pages. Using this mechanism, it is for example possible to submit a form, load another page or update the current one when a user clicks on a specific button.

Simple events

The code required to bind a single listener to an element is simple and works across browsers:

[code lang="js"]
document.getElementById("myButton").onclick = function() {
  alert("My button has been clicked");
};

When a second listener is bound to the same event on this element, it will however override the first one.

[code lang="js"]
document.getElementById("myButton").onclick = function() {
  alert("Only this alert will now be displayed");
};

Clean event listeners

Current Web browsers offer a clean way to add event listeners to an element without overriding the previous ones. Using a JavaScript library such as jQuery is especially useful in this case, since there are two different implementations: one specific to Internet Explorer, and one defined by the W3C and adopted by every other browser vendor[1. attachEvent in Internet Explorer and addEventListener]. jQuery offers a unified way to deal with those two different implementations:

[code lang="js"]
jQuery("#myButton").click( function(event) {
  alert("My button has been clicked");
});
// And later...
jQuery("#myButton").click( function(event) {
  alert("And this alert will also be displayed");
});

Event bubbling

Another useful characteristic of events in Web pages is that most of them bubble in the DOM: once an event occurs on an element, it will also occur on its direct ancestor, then its ancestor’s ancestor, until it hits the document root or an event listener prevents it from bubbling. For example, take the following unordered list and script:

[code lang="html"]
<ul id="myUnorderedList">
  <li class="blue">A first <i>list item</i>.</li>
  <li class="red">A second <i>list item</i>.</li>
</ul>

[code lang="js"]
jQuery("#myUnorderedList").click( function(event) {
  alert("The list has been clicked");
});

No matter where the user clicks in the list (on a list item, on an element nested in a list item, or in the gap between two list items), the click event will still be detected by the listener set at the list level (on the <ul> tag). The event parameter passed to the bound function carries information about the exact place where the click occurred, in its target property:

[code lang="js"]
jQuery("#myUnorderedList").click( function(event) {
  alert("You have exactly clicked a " + event.target.nodeName + " element.");
  // event.target.nodeName will be I, LI or UL depending where the cursor was.
});

This is especially useful to avoid adding an event listener for each item of the list.

Event delegation

Using the target attribute, it is also possible to know in which particular <li> the click occurred, even if it occurred on an element nested in the <li>:

[code lang="js"]
jQuery("#myUnorderedList").click( function(event) {
  var target = event.target;
  // If the element clicked was nested in the li,
  // it is necessary to loop through its ancestors
  // until a li is found.
  // The second test ends the loop when no li is found.
  while(target.nodeName != "LI" && target.ownerDocument)
    target = target.parentNode;
  // If a li has actually been found...
  if(target.nodeName == "LI")
    alert("The colour of this li is " + target.className);
    // target.className will be either "red" or "blue"
});
Another advantage of event delegation, besides requiring an event listener to be added only once, is that its logic will still work for any item appended later to the list. In dynamic web pages where additional content can be loaded arbitrarily (by an ajax request, for example), this proves to be an invaluable mechanism.

jQuery1.3 helpers

The 1.3 version of jQuery introduces two helpers which make event delegation a breeze:

the .closest( selector ) function

This function is equivalent to the loop written in the previous code snippet to look for a specific ancestor of an event target. The equivalent of the above code using .closest() is as short as:

[code lang="js"]
jQuery("#myUnorderedList").click( function(event) {
  var $target = $(event.target).closest("li");
  // if a li has actually been found
  if($target.length)
    alert("The colour of this li is " + $target.attr("class"));
});

The function can take any type of CSS selector as argument, in a pure jQuery fashion:

[code lang="js"]
jQuery("#myUnorderedList").click( function(event) {
  var $target = $(event.target).closest("li.blue");
  if($target.length)
    alert("The colour of this li is blue");
});

The upcoming jQuery 1.3.3 adds a second optional argument to closest: the context. Previously, when the clicked element was not nested in an element matching the selector, every ancestor of the target was still tested, up to the root of the page. It is now possible to end the search for a target ancestor when a given element is hit. For example, in the previous snippet of code, there is no need to search for a list item among the unordered list’s ancestors. In most cases, this will be as simple as using the context of the event listener itself:

[code lang="js"]
jQuery("#myUnorderedList").click( function(event) {
  var $target = $(event.target).closest("li.blue", this);
  if($target.length)
    alert("The colour of this li is blue");
});

the .live() function

This function has to be used just after a DOM query to react to events occurring on the matched set of elements, even if other elements matching the selector are added to the document later.

[code lang="js"]
jQuery("li.blue").live("click", function(event) {
  alert("The colour of this li is blue");
});

Although this might seem like a magic and somewhat obscure way of achieving event delegation, the underlying principle is actually exactly the same as in the previous code snippet. Once again, it is possible to improve the performance of this function by providing a context for the selector. In this case the context of the jQuery object is used:

[code lang="js"]
jQuery("li.blue", document.getElementById("myUnorderedList"))
	.live("click", function(event){ ... } );
// or
jQuery("li.blue", jQuery("#myUnorderedList"))
	.live("click", function(event){ ... } );
[/code]

It is important to note that both .closest() and .live() will only be useful for events which actually bubble in the DOM. Events such as focus, blur, mouseenter and mouseleave thus cannot be used with .live().
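The bubbling requirement can be illustrated without jQuery: delegation only works because a single listener placed on a common ancestor receives the events that bubble up from its descendants, and can then climb from the event target to decide which element the event concerns. A minimal sketch, with plain objects standing in for DOM nodes and makeDelegatedHandler being a hypothetical helper, not a jQuery API:

```javascript
// Build a handler to attach on a container: for each bubbled event it
// climbs from the target towards the container, looking for a match.
function makeDelegatedHandler(matches, handler) {
  return function (event) {
    var node = event.target;
    while (node && node !== event.currentTarget) {
      if (matches(node)) {
        return handler(node, event);
      }
      node = node.parentNode;
    }
  };
}

// Simulated tree and bubbled click: the target is an <i> inside an <li>
var list = { tagName: "UL", parentNode: null };
var item = { tagName: "LI", parentNode: list };
var italic = { tagName: "I", parentNode: item };

var clicked = null;
var onListClick = makeDelegatedHandler(
  function (node) { return node.tagName === "LI"; },
  function (node) { clicked = node; }
);
onListClick({ target: italic, currentTarget: list });
// clicked now references the enclosing list-item
```

An event that never bubbles up to the container (such as focus or blur) would simply never reach this handler, which is exactly why .live() cannot support those events.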

Dealing with mouseover/mouseout instead of mouseenter/mouseleave

mouseenter and mouseleave are events originally specific to Internet Explorer (but made available in all browsers by jQuery) which do not bubble. This characteristic is very useful to prevent listeners from being triggered by events happening inside nested elements. Consider the following list:

[code lang="html"]
<ul id="myUnorderedList">
	<li>A first <i>item</i>.<em class="tips">Hidden info</em></li>
	<li>A second <i>item</i>.<em class="tips">Hidden info</em></li>
</ul>
[/code]

Trying to display the hidden information only when the cursor is over a list-item could be written as follows:

[code lang="js"]
$("li").mouseover( function() {
	// reveal the hidden info nested in this list-item
	$(".tips", this).show();
}).mouseout( function() {
	$(".tips", this).hide();
});
[/code]

With such code, the hidden info would flicker every time the cursor is moved from or to the text wrapped in an <i>, because a mouseout event occurs even when the cursor moves to a child of the current element. Using the mouseenter and mouseleave events effectively solves this problem, but at the cost of making event delegation impossible. The issue can actually be solved by combining .live() on the mouseover and mouseout events with .closest(), using the relatedTarget attribute of the event to filter out events whose origin is another element inside the same <li>:

[code lang="js"]
$("li").live("mouseover", function(event) {
	var $related = $(event.relatedTarget).closest("li");
	// act only if the cursor comes from outside this list-item
	if(!$related.length || $related[0] != this)
		$(".tips", this).show();
});
$("li").live("mouseout", function(event) {
	var $related = $(event.relatedTarget).closest("li");
	if(!$related.length || $related[0] != this)
		$(".tips", this).hide();
});
[/code]
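The condition shared by the two handlers can be isolated as a small predicate: the handler should only act when the element the cursor comes from (or moves to) does not belong to the same list-item. A minimal sketch, where crossedBoundary is a hypothetical name introduced for illustration:

```javascript
// relatedLi: the closest <li> ancestor of event.relatedTarget, or null
// currentLi: the list-item the handler is attached to
// Returns true only when the cursor actually crossed the <li> boundary.
function crossedBoundary(relatedLi, currentLi) {
  return relatedLi === null || relatedLi !== currentLi;
}

var firstItem = { tagName: "LI" };
var secondItem = { tagName: "LI" };

crossedBoundary(null, firstItem);       // true: cursor came from outside any li
crossedBoundary(firstItem, firstItem);  // false: cursor moved within the same li
crossedBoundary(secondItem, firstItem); // true: cursor came from a sibling li
```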

Using event delegation with mouseover and mouseout however has a non-negligible impact on performance, since a significant amount of code is run every time the cursor enters a nested element, just to check if the bound function should be executed. This issue and a possible improvement of jQuery’s code are detailed in a thread in jQuery’s Google group.


The helpers added in jQuery 1.3 make it really easy to implement event delegation, and understanding how they work allows more advanced problems to be tackled easily. As a general rule, an optimisation using event delegation is possible wherever the same event listener is used for two different elements: an event listener should be bound only once, using event delegation if necessary. Using it with mouseover and mouseout should however be done carefully.

It is also recommended to read the documentation page of .live() to understand the limitations of this function. One might also want to know how to .unbind() events (.die() being the equivalent when using .live()).

The state of cross-browser vector graphics

Vector graphics have long been the preserve of Flash on the Web, as browser vendors never agreed on “one standard to rule them all”. But intrepid newcomers relying on browsers’ native features might well change this situation…

A bit of History

Microsoft first proposed their own standard, VML, to the W3C; it was later merged with Adobe’s PGML to give birth to the SVG recommendation.

However, Microsoft chose to support vector graphics in their browser before the corresponding W3C working group had a chance to publish its first recommendation: VML was implemented in IE5, released in March 1999, while SVG 1.0 was only published as a recommendation two and a half years later, in September 2001. On the other hand, partial support for SVG was not to be found before Firefox 1.5, released in 2005.

Even though VML was the first technology of its kind to be natively supported in a browser, and although that browser enjoyed a near-monopolistic usage share for several years (above 90% between 2001 and 2005 according to different sources), it remains unclear why the technology never really took off… On the contrary, the fate of SVG, never supported by the dominant browser, is much easier to understand.

A bit of Technique

From a Web developer’s perspective, it is arguable that the main advantage of VML lies in its deep integration with HTML: it is possible to nest HTML elements in VML ones and vice versa. SVG was meant to be an independent specification; as a result, it has to be used in a specific namespace in HTML documents, and HTML can only be used in SVG when nested in a <foreignObject>, an element poorly supported in the earliest implementations.

However, SVG reveals all its power when it comes to transformations: rotations, translations, skewing and scaling can be nested in any way, whereas in VML different attributes, filters and computations have to be used for each.

A bit of hope…

Meanwhile, Adobe’s claim of a 99% browser penetration for its proprietary but nonetheless fully vector-based Flash technology obviously succeeded in seducing a significantly larger number of developers. Nevertheless, two projects saw the light of day with the promise of providing a cross-browser vector graphics API relying on an abstraction layer above VML and SVG:

  • Dojox.gfx, an integral part of the dojo toolkit, specialised in 2D graphics. Its main advantage is that it does provide “one API to rule them all”: its set of shapes, paths and transformations can be rendered either to VML, SVG, canvas (the scriptable raster graphics element of HTML5) or Silverlight (details follow). It ships with helpers to make elements movable and to render vector fonts, although the latter’s supposed negative impact on performance makes them only suitable for a limited amount of text (see the Caveats section of the page).
  • Raphael, a library-agnostic (i.e. standalone) library which offers a clean API mimicking jQuery’s. It provides the same basic features as dojox.gfx together with animation, color manipulation, event handling, and vector fonts (as of version 0.8) right out of the box (animation and color manipulation are integrated in other parts of the dojo toolkit).

Both libraries can be used in a similar way: first, the area where drawing will take place is defined. In Raphael it is called the paper; in dojox.gfx, the surface. The function that builds this area returns a reference not to the physical DOM element that has just been created, but to a more abstract JavaScript object offering browser-independent methods. The following kind of code is always found at the beginning of a script using one of these libraries:

[code lang="js"]
var area = createArea( positionOrParentElement, dimensions);

Using this reference, any shape or path can be created, once again those functions return an abstract reference on the created shape:

[code lang="js"]
var rectangle = area.createRect( position, dimensions);

And this reference can in turn be used to dynamically modify the attributes of the element, to animate it or to delete it:

[code lang="js"]
rectangle.setAttribute("fill", "red");
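The pattern behind those three snippets can be made concrete with a self-contained sketch: a surface object hands out abstract shape wrappers whose methods hide the rendering details. All names here are hypothetical, modelled on neither library’s exact API:

```javascript
// Minimal abstraction layer: the caller never touches a physical
// element, only wrapper objects whose methods could delegate to
// SVG or VML behind the scenes.
function Surface(width, height) {
  this.width = width;
  this.height = height;
  this.shapes = [];
}

Surface.prototype.createRect = function (x, y, w, h) {
  var rect = {
    x: x, y: y, width: w, height: h,
    attrs: {},
    setAttribute: function (name, value) {
      // a real implementation would update the SVG/VML node here
      this.attrs[name] = value;
      return this;
    }
  };
  this.shapes.push(rect);
  return rect;
};

var area = new Surface(320, 200);
var rectangle = area.createRect(10, 10, 100, 50);
rectangle.setAttribute("fill", "red");
```

Returning the wrapper from setAttribute also allows the chained calls familiar to jQuery users.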

This kind of code should look familiar to any developer used to JavaScript libraries such as jQuery, Prototype and of course dojo. There is however an important difference: with the latter libraries, it is possible to find an existing element in the DOM and build an abstract object from it:

[code lang="js"]
var existingElement = library.domQuery("#myDiv");
existingElement.animate({'opacity': 0.5});

Using vector graphics, this abstract object is only created when the associated physical element is inserted into the DOM. The reference that is returned must be kept for as long as the element needs to be modified. This turns out to be important when doing event delegation. With a traditional library, modifying the opacity of any clicked list-item of a list could be achieved with the following code.

[code lang="js"]
// A function will be executed every time the user clicks on the list
library.domQuery("#myList").onClick(function(event) {

	// The event object references the physical element that has been clicked
	var physicalListItem = event.target;

	// The library can build an abstract object from this existing element
	var abstractListItem = library.abstract(physicalListItem);

	// This object can then be used to modify the properties of this element no matter the browser
	abstractListItem.animate({'opacity': 0.5});
});
[/code]

Unfortunately there is no such thing as

[code lang="js"]
var abstractListItem = area.abstract(physicalListItem);
[/code]
But this can easily be worked around by keeping a record of all abstract objects in a hash table, indexed by the id of their element:

[code lang="js"]
var abstractRect = area.createRect( position, dimension );
// Make sure the shape has an id
abstractRect.setAttribute("id", "myId");
// Store this reference associated with the id
var hash = { "myId": abstractRect };

area.onClick(function(event) {

	// The event gives access to the id of the clicked shape
	var shapeId = event.target.id;

	// Which can in turn be used to recover the abstract shape
	var abstractShape = hash[shapeId];
});
[/code]
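Stripped of any drawing library, this bookkeeping boils down to a plain object used as a hash table keyed by shape id. A self-contained sketch, where registerShape and shapeFromEvent are hypothetical helper names:

```javascript
// Registry mapping shape ids to the abstract objects kept at creation time
var shapesById = {};

function registerShape(id, shape) {
  shapesById[id] = shape; // store the reference as soon as the shape exists
  return shape;
}

function shapeFromEvent(shapeId) {
  // recover the abstract shape from the id carried by the click event
  return shapesById.hasOwnProperty(shapeId) ? shapesById[shapeId] : null;
}

var abstractRect = registerShape("myId", { type: "rect", fill: "red" });
shapeFromEvent("myId");  // returns the stored abstract rectangle
shapeFromEvent("other"); // null: no shape registered under this id
```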

The impact of such a hash map on memory footprint is discussed in the paragraph about performance.

… and delusion

With the release of the 8th version of Internet Explorer, Microsoft dealt VML a severe blow: several features appear to be broken, the most serious being the ability to use percentage units… What good are vector graphics when the drawing cannot scale? It remains unclear whether those issues were introduced unintentionally; what is certain is that they will help Microsoft impose its new technology with vector graphics capabilities: Silverlight, a competitor to Adobe’s Flash/Flex/AIR framework, also relying on a multi-browser plugin.

This inexplicable coincidence unfortunately leaves little hope for a happy ending to the epic battle of native vector graphics.

State of the art

Nevertheless, the baby should not be thrown out with the bath water Microsoft is siphoning off. It remains possible to use vector graphics in the browser, as IE8 can be switched to a compatibility mode that behaves just like the good ol’ IE7 used to. The two aforementioned vector graphics libraries still have to be compared… So be it:


Features

In terms of feature richness, the advantage seems to go to dojox.gfx. As far as shapes, paths and transformations are concerned, the two APIs are equivalent, and both libraries offer animations, color manipulations and event handling, although in dojo these lie in a lower-level part of the toolkit. However, the alternative technologies that dojo can use to render the graphics might give it a real edge: Silverlight makes it a more future-proof solution, and canvas even allows server-side rendering into raster graphics. Moreover, Raphael does not allow units other than pixels, these being hard-coded in the library.

File size

In terms of file size, however, analysing the loading process of the two solutions with Firebug yields approximately 22 kB for Raphael (minified and gzipped) and a cumulative 40 kB for dojo (when loaded from AOL’s CDN). In one single file, Raphael ships all its utilities and both the SVG and VML rendering code, whereas dojox.gfx adopts a lazy-loading approach, with its utilities spread throughout inter-dependent files and a single rendering code path loaded according to the browser’s capabilities. Even with lazy loading, dojo is almost twice as big… This can be an important criterion for developers who do not intend to use the full capabilities of dojo or who have chosen another JavaScript toolkit. In the latter case, it should be noted that even Raphael duplicates some features commonly found in JavaScript libraries: animations, color manipulation and event handling.


Performance

In order to evaluate the impact of storing all references to abstract shapes, two tests were built: a thousand rectangles were created in a page; in one case all the references were kept in a table in order to change the colour of the rectangles, whereas in the other case no references were kept. It appears that storing the references has a negligible impact on memory footprint in Firefox 3. In Internet Explorer 8, however, Raphael systematically caused the browser to freeze. The test would only complete when limited to 500 shapes, and then after a 30-second delay. The profiler shipped with IE8 revealed that the lag was mostly caused by Raphael’s internal setBox method, which is specific to its VML code.

It should also be noted that Raphael does not offer event handling at the paper level. Event delegation is thus impossible when the library is used standalone, despite the benefits of this technique for performance (event listeners are set only once on a container instead of once per element).


Documentation

Last but not least, the quality of the documentation can in many cases tip the scales.

Raphael offers documentation of its API including useful code snippets; however, some utilities for color manipulation such as Raphael.hsb2rgb(), Raphael.rgb2hsb() or Raphael.getRGB() are not yet documented. On the project’s home page, numerous visually appealing and inspiring examples can be found, such as a dynamic graph, a starting point for a mind map and a demo of the vector fonts. It can be argued that an example with detailed code comments is lacking.

Dojo offers a complete and up-to-date reference of its API, where the documentation of dojox.gfx can be found. However, developers not familiar with the dojo toolkit should read Dojo, Now With Drawing Tools to get started. This article also provides more advanced examples with appropriate comments.

Choice is yours

Dojox.gfx is a natural choice for developers already working with dojo, while Raphael is more suitable as a standalone library thanks to its smaller file size. When a large number of shapes have to be created at once, the performance of Raphael in Internet Explorer should however be evaluated. The ability of dojox.gfx to render graphics using Silverlight or canvas can prove useful in some cases.

For developers already using a JavaScript library other than dojo, it should be noted that in both cases there will be some code redundancy. Moreover, if performance is critical, the file size of dojo and Raphael’s Internet Explorer issues might even rule out both…