HTML5 Rocks

Automating Web Performance Measurement

By Addy Osmani

Web performance can have a huge impact on your entire user experience. If you’ve been looking at improving your own site’s perf lately, you’ve probably heard of PageSpeed Insights - a tool that analyzes pages and offers advice on how to make them faster based on best practices for mobile and desktop web performance.

PageSpeed’s scores are based on a number of factors, including whether your scripts are minified, your images optimized, your content gzipped, your tap targets appropriately sized, and landing-page redirects avoided.

With 40% of people potentially abandoning pages that take more than 3 seconds to load, caring about how quickly your pages load on your users’ devices is becoming an essential part of the development workflow.

Performance metrics in your build process

Although manually visiting PageSpeed Insights to check your scores is fine, a number of developers have been asking whether it's possible to get similar performance scoring into their build process.

The answer is: absolutely.

Introducing PSI for Node

Today we’re happy to introduce PSI for Node - a new module that works great with Gulp, Grunt and other build systems and can connect up to the PageSpeed Insights service and return a detailed report of your web performance. Let’s look at a preview of the type of reporting it enables:

The results above are good for getting a feel for the type of improvements that could be made. For example, a 5.92 for sizing content to the viewport means some work can still be done, whilst a 24 for minimizing render-blocking resources may suggest you need to defer loading of JS using the async attribute.

Lowering the barrier of entry to PageSpeed Insights

If you've tried using the PageSpeed Insights API in the past or attempted to use any of the tools we build on top of it, you probably had to register for a dedicated API key. We know that although this just takes a few minutes, it can be a turn-off for making Insights part of your regular workflow.

We're happy to inform you that the PageSpeed Insights service supports making requests without an API key for up to 1 request every 5 seconds (plenty for anyone). For more regular usage or serious production builds, you'll probably want to register for a key.
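If you're scripting requests yourself rather than using the module, the keyless one-request-per-5-seconds limit can be respected with a tiny rate limiter. The sketch below is a hypothetical helper (not part of the PSI module), with an injected clock so the logic is easy to test:

```javascript
// Allow at most one request every `intervalMs` milliseconds.
// `now` is a function returning the current time in ms, injected for testability.
function makeRateLimiter(intervalMs, now) {
  var last = -Infinity;
  return function allowed() {
    var t = now();
    if (t - last >= intervalMs) {
      last = t;
      return true;
    }
    return false;
  };
}

// Fake clock stepping through timestamps (ms) to exercise the limiter:
var times = [0, 1000, 5000, 6000, 10000];
var i = 0;
var clock = function () { return times[i++]; };
var allow = makeRateLimiter(5000, clock);
var results = times.map(function () { return allow(); });
// results: [true, false, true, false, true]
```

Requests at 0s, 5s and 10s pass; the ones 1 second after a successful request are held back.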

The PSI module supports both a nokey option, for getting set up in a couple of minutes, and a key option for use with a registered API key. Details on how to register for an API key are documented.

Getting started

You have two options for how you integrate PSI into your workflow. You can either integrate it into your build process or run it as a globally installed tool on your system.

Build process

Using PSI in your Grunt or Gulp build process is fairly straightforward. If you’re working on a Gulp project, you can install and use PSI directly.


Install PSI using npm and pass --save-dev to save it to your package.json file.

npm install psi --save-dev

Then define a Gulp task for it in your gulpfile as follows (also covered in our Gulp sample project):

   var gulp = require('gulp');
   var psi = require('psi');
   var site = 'http://www.html5rocks.com'; // the URL you want to test

   gulp.task('psi', function (cb) {
       psi({
           nokey: 'true', // or use key: 'YOUR_API_KEY'
           url: site,
           strategy: 'mobile',
       }, cb);
   });

For the above, you can then run the task using:

gulp psi

And if you’re using Grunt, you can use grunt-pagespeed by James Cryer, which now uses PSI to power its reporting.


npm install grunt-pagespeed --save-dev

Then load the task in your Gruntfile:

   grunt.loadNpmTasks('grunt-pagespeed');
and configure it for use:

   grunt.initConfig({
       pagespeed: {
           options: {
               nokey: true,
               url: "",
               strategy: "mobile"
           }
       }
   });

You can then run the task using:

grunt pagespeed

Installing as a global tool

You can also install PSI as a globally available tool on your system. Once again, we can use npm to install the tool:

$ npm install -g psi

And via any terminal window, request PageSpeed Insights reports for a site, either with the nokey option or with a registered API key as follows:

psi --nokey --strategy=mobile

or for those with a registered API key:

psi --key=YOUR_API_KEY --strategy=mobile

That’s it!

Go forth and make performance part of your culture

We need to start thinking more about the impact of our designs and implementations on user experience.

Solutions like PSI can keep an eye on your web performance on desktop and mobile and are useful when used as part of your regular post-deployment workflow.

We're eager to hear of any feedback you might have and hope PSI helps improve the performance culture on your team.

Chrome Dev Summit: Performance Summary

By Paul Lewis

#perfmatters: Tooling techniques for the performance ninja

Knowing your way around your development tools is key to becoming a performance Grand Master. Colt stepped through the three pillars of performance: network, compute and render, providing a tour of the key problems in each area and the tools available for finding and eradicating them.


  • You can now profile Chrome on Android with the DevTools you know and love from desktop.
  • The iteration loop for performance work is: gather data, achieve insight, take action.
  • Prioritize assets that are on the critical rendering path for your pages.
  • Avoid painting; it’s super expensive.
  • Avoid memory churn and executing code during critical times in your app.

#perfmatters: Optimizing network performance

Network and latency typically account for 70% of a site’s total page load time. That’s a large percentage, but it also means that any improvements you make there will reap huge benefits for your users. In this talk Ilya stepped through recent changes in Chrome that will improve loading time, as well as a few changes you can make in your environment to help keep network load to an absolute minimum.


  • Chrome M27 has a new and improved resource scheduler.
  • Chrome M28 has made SPDY sites (even) faster.
  • Chrome’s simple cache has received an overhaul.
  • SPDY / HTTP/2.0 offer huge transfer speed improvements. There are mature SPDY modules available for nginx, Apache and Jetty (to name just three).
  • QUIC is a new and experimental protocol built on top of UDP; it’s early days, but however it works out, users will win.

#perfmatters: 60fps layout and rendering

Hitting 60fps in your project directly correlates with user engagement and is crucial to its success. In this talk Nat and Tom talked about Chrome’s rendering pipeline, some common causes of dropped frames and how to avoid them.


  • A frame is 16ms long. It contains JavaScript, style calculations, painting and compositing.
  • Painting is extremely expensive. A Paint Storm is where you unnecessarily repeat expensive paint work.
  • Layers are used to cache painted elements.
  • Input handlers (touch and mousewheel listeners) can kill responsiveness; avoid them if you can. Where you can’t, keep them to a minimum.

#perfmatters: Instant mobile web apps

The Critical Rendering Path refers to anything (JavaScript, HTML, CSS, images) that the browser requires before it is able to begin painting the page. Prioritizing the delivery of assets on the critical rendering path is a must, particularly for users on network-constrained devices such as smartphones on cellular networks. Bryan talked through how the team at Google went through the process of identifying and prioritizing the assets for the PageSpeed Insights website, taking it from a 20 second load time to just over 1 second!


  • Eliminate render-blocking JavaScript and CSS.
  • Prioritize visible content.
  • Load scripts asynchronously.
  • Render the initial view server-side as HTML and augment with JavaScript.
  • Minimize render-blocking CSS; deliver only the styles needed to display the initial viewport, then deliver the rest.
  • Large data URIs inlined in render-blocking CSS are harmful for render performance; they are blocking resources, whereas image URLs are non-blocking.
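The async-loading advice in the list above boils down to a one-attribute change. A minimal illustration (the file name is hypothetical):

```html
<!-- Render-blocking: parsing halts until app.js downloads and executes -->
<script src="app.js"></script>

<!-- Non-blocking: downloads in parallel, executes when ready -->
<script src="app.js" async></script>
```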

300ms tap delay, gone away

By Jake Archibald

You'll find large articles throughout this site dedicated to shaving 10ms here and 90ms there in order to deliver a fast and fluid user experience. Unfortunately every touch-based mobile browser, across platforms, has an artificial ~300ms delay between you tapping a thing on the screen and the browser considering it a "click". When people think of the web as being sluggish compared to native apps on mobile, this is one of the main contributors.

However, as of Chrome 32 for Android, which is currently in beta, this delay is gone for mobile-optimised sites, without removing pinch-zooming!

This optimisation applies to any site that uses:

<meta name="viewport" content="width=device-width">

(or any equivalent that makes the viewport <= device-width)

Why do clicks have a 300ms delay?

If you go to a site that isn't mobile optimised, it starts zoomed out so you can see the full width of the page. To read the content, you either pinch zoom, or double-tap some content to zoom it to full-width. This double-tap is the performance killer, because with every tap we have to wait to see if it might become a double-tap, and that wait is 300ms. Here's how it plays out:

  1. touchstart
  2. touchend
  3. Wait 300ms in case of another tap
  4. click

This pause applies to click events in JavaScript, but also to other click-based interactions such as links and form controls.
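The timing heuristic behind that wait can be sketched as pure logic (illustrative only, not Chrome's actual implementation):

```javascript
// A tap at time t2 counts as a double-tap if it lands within the
// 300ms window after a previous tap at time t1 (times in milliseconds).
function isDoubleTap(t1, t2, windowMs) {
  windowMs = windowMs || 300;
  var delta = t2 - t1;
  return delta >= 0 && delta <= windowMs;
}

isDoubleTap(1000, 1250); // true  -> treat as a zoom gesture, no click
isDoubleTap(1000, 1400); // false -> single tap; the click fires 300ms late
```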

You can't simply shortcut this with touchend listeners either. Compare these demos on a mobile browser other than Chrome 32:

Tapping on the rows changes their colour. The touchend example is much faster but, depending on the browser, can also trigger after scrolling. This is because the spec doesn't define what can cancel the flow of touch events. Current versions of iOS Safari, Firefox, IE, and the old Android Browser trigger touchend after scrolling; Chrome doesn't.

Microsoft's PointerEvents spec does the right thing and specifies that pointerup doesn't trigger if a lower-level action such as scrolling occurs. However, currently only IE supports pointer events, although Chrome has a ticket for it. But even then, the 300ms delay would only be dropped on sites that used this listener in a way that applied to all links, form elements, and JavaScript interactions on the page.

How Chrome removed the 300ms delay

Chrome and Firefox for Android have, for some time now, removed the 300ms tap delay for pages with this:

<meta name="viewport" content="width=device-width, user-scalable=no">

Pages with this cannot be zoomed, therefore "double-tap to zoom" isn't an interaction, therefore there's no need to wait for double-taps. However, we also lose pinch-zooming.

Pinch-zooming is great for taking a closer look at a photo, some small print, or dealing with a set of buttons/links that are placed too closely together. It's an out-of-the-box accessibility feature.

If a site has…

<meta name="viewport" content="width=device-width">

…double-tap zooms in a little bit. Not a particularly useful amount. A further double-tap zooms back out. We feel this feature, on mobile-optimised pages, isn't useful. So we removed it! This means we can treat taps as clicks instantly, but we retain pinch-zooming.

Is this change an accessibility concern?

We don't believe so, but the reason we release beta versions of Chrome is so users can try new features and give us feedback.

We tried to imagine a user this may affect, someone who:

  • has a motor impairment that prevents multi-touch interaction such as pinch-zoom, but not two taps in the same area within 300ms
  • has a minor visual impairment that is overcome by the small amount of zooming provided by double-tap on mobile optimised sites

But they're catered for by the text sizing tools in Chrome's settings, or the screen magnifier in Android, which covers all sites and native apps, and can be activated by triple-tap.

(Screenshots: Chrome accessibility settings and Android screen magnification.)

However, we may have missed something, so if you are affected by this change, or know someone who is, let us know in the comments or file a ticket.

Will other browsers do the same?

I don't know, but I hope so.

Firefox has a ticket for it and currently avoids the 300ms delay for unzoomable pages.

On iOS Safari, double-tap is a scroll gesture on unzoomable pages. For that reason they can't remove the 300ms delay. If they can't remove the delay on unzoomable pages, they're unlikely to remove it on zoomable pages.

Windows phones also retain the 300ms delay on unzoomable pages, but they don't have an alternative gesture like iOS so it's possible for them to remove this delay as Chrome has. You can remove the delay using:

html {
    -ms-touch-action: manipulation;
    touch-action: manipulation;
}

Unfortunately this is a non-standard Microsoft extension to the pointer events spec. Also, programmatic fixes like this are opt-in by the developer, whereas the Chrome fix speeds up any existing mobile-optimised site.

In the mean time…

FastClick by FT Labs uses touch events to trigger clicks faster & removes the double-tap gesture. It looks at the amount your finger moved between touchstart and touchend to differentiate scrolls and taps.

Adding a touchstart listener to everything has a performance impact, because lower-level interactions such as scrolling are delayed by calling the listener to see if it event.preventDefault()s. Thankfully, FastClick will avoid setting listeners in cases where the browser already removes the 300ms delay, so you get the best of both!
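The movement check FastClick performs can be sketched as a small pure function (a simplified illustration, not the library's actual code):

```javascript
// Treat the touch as a tap only if the finger moved less than a small
// threshold between touchstart and touchend; larger movements are scrolls.
function classifyTouch(start, end, thresholdPx) {
  thresholdPx = thresholdPx || 10;
  var dx = end.x - start.x;
  var dy = end.y - start.y;
  return Math.sqrt(dx * dx + dy * dy) < thresholdPx ? 'tap' : 'scroll';
}

classifyTouch({ x: 0, y: 0 }, { x: 3, y: 4 });  // 'tap' (moved 5px)
classifyTouch({ x: 0, y: 0 }, { x: 0, y: 40 }); // 'scroll'
```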

Flexbox layout isn't slow

By Paul Irish

TL;DR: Old flexbox (display: box) is 2.3x slower than new flexbox (display: flex).

A bit ago, Wilson Page wrote a great article for Smashing Magazine digging into how they brought the Financial Times webapp to life. In the article, Wilson notes:

As the app began to grow, we found performance was getting worse and worse.

We spent a good few hours in Chrome Developers Tools’ timeline and found the culprit: Shock, horror! — it was our new best friend, flexbox. The timeline showed that some layouts were taking close to 100 milliseconds; reworking our layouts without flexbox reduced this to 10 milliseconds!

Wilson's comments were about the original (legacy) flexbox that used display: box;. Unfortunately they never got a chance to answer if the newer flexbox (display: flex;) was faster, but over on CSS Tricks, Chris Coyier opened that question.

We asked Ojan Vafai, who wrote much of the implementation in WebKit & Blink, about the newer flexbox model and implementation.

The new flexbox code has a lot fewer multi-pass layout codepaths. You can still hit multi-pass codepaths pretty easily though (e.g. flex-align: stretch is often 2-pass). In general, it should be much faster in the common case, but you can construct a case where it's equally as slow.

That said, if you can get away with it, regular block layout (non-float) will usually be as fast or faster than new flexbox, since it's always single-pass. But new flexbox should be faster than using tables or writing custom JS-based layout code.

To see the difference in numbers, I made a head-to-head comparison of old v new syntax.

Old v New Flexbox Benchmark

  • old flexbox vs new flexbox (with fallback)
  • 500 elements per page
  • evaluating page load cost to lay out the elements
  • averaged across 3 runs each
  • measured on desktop. (mobile will be ~10x slower)

Old flexbox: layout costs of ~43.5ms

New flexbox: layout costs of ~18.2ms

Summary: Old is 2.3x slower than new.
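For reference, the headline figure falls straight out of the two averaged layout costs:

```javascript
// Divide the old flexbox layout cost by the new one to get the slowdown factor.
function speedup(oldMs, newMs) {
  return oldMs / newMs;
}

var ratio = speedup(43.5, 18.2); // ≈ 2.39, reported as "2.3x"
```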

What should you do?

When using flexbox, always author for the new stuff: the IE10 tweener syntax and the new updated flexbox that’s in Chrome 21+, Safari 7+, Firefox 22+, Opera (& Opera Mobile) 12.1+, IE 11+, and Blackberry 10+. In many cases you can do a fallback to the legacy flexbox to pick up some older mobile browsers.


  • I also ran the benchmark using display:table-cell and it hit 30ms, right between the two flexbox implementations.
  • The benchmarks above only represent the Blink & WebKit side of things. Due to the time of implementation, flexbox is nearly identical across Safari, Chrome & Android.

Remember: Tools, not rules

What’s more important is optimizing what matters. Always use the timeline to identify your bottlenecks before spending time optimizing one sort of operation.

In fact, we've connected with Wilson and the Financial Times Labs team and, as a result, improved the Chrome DevTools coverage of layout performance tooling. We'll soon be adding the ability to view the relayout boundary of an element, and Layout events in the timeline are loaded with details of the scope, root, and cost of each layout:

Profiling Long Paint Times with DevTools' Continuous Painting Mode

By Paul Irish

Continuous painting mode for paint profiling is now available in Chrome Canary. This article explains how to identify a problem in page painting time and how you can use this new tool to detect bottlenecks in painting performance.

Investigating painting time on your page

So you noticed that your page doesn't scroll smoothly. This is how you would start tackling the problem. For our example, we'll use the demo page Things We Left On The Moon by Dan Cederholm.

You open the Web Inspector, start a Timeline recording and scroll your page up and down. Then you look at the vertical timelines, which show you what happened in each frame.

If you see that most time is spent painting (big green bars crossing the 60fps line), you need to take a closer look at why this is happening. To investigate your paints, use the Show paint rectangles setting of the Web Inspector (cog icon in the bottom right corner of the Web Inspector). This will show you the regions where Chrome paints.

There are different reasons for Chrome to repaint areas of the page:

  • DOM nodes get changed in JavaScript, which causes Chrome to recalculate the layout of the page.
  • Animations are playing that get updated in a frame-based cycle.
  • User interaction, like hovering, causes style changes on certain elements.
  • Any other operation that causes the page layout to change.

As a developer you need to be aware of the repaints happening on your page. Looking at the paint rectangles is a great way of doing that. In the example screenshot above you can see that the whole screen is covered in a big paint rectangle. This means the whole screen is repainted as you scroll, which is not good. In this specific case this is caused by the CSS style background-attachment:fixed which causes the background image of the page to stay at the same position while the content of the page moves on top of it as you scroll.
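For reference, the style causing the full-screen repaints in this example looks like the following (the image URL is hypothetical):

```css
body {
  background: url('bg.png');
  /* Keeps the image in place while content scrolls over it,
     forcing a full-screen repaint on every scroll: */
  background-attachment: fixed;
}
```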

If you identify that the repaints cover a big area and/or take a long time, you have two options:

  1. You can try to change the page layout to reduce the amount of painting. If possible Chrome paints the visible page only once and adds parts that have not been visible as you scroll down. However, there are cases when Chrome needs to repaint certain areas. For example the CSS rule position:fixed, which is often used for navigation elements that stay in the same position, can cause these repaints.

  2. If you want to keep your page layout, you can try to reduce the painting cost of the areas that get repainted. Not every CSS style has the same painting cost, some have little impact, others a lot. Figuring out the painting costs of certain styles can be a lot of work. You can do this by toggling styles in the Elements panel and looking at the difference in the Timeline recording, which means switching between panels and doing lots of recordings. This is where continuous painting mode comes into play.

Continuous painting mode

Continuous painting mode is a tool that helps you identify which elements are costly on the page. It puts the page into an always repainting state, showing a counter of how much painting work is happening. Then, you can hide elements and mutate styles, watching the counter, in order to figure out what is slow.


In order to use continuous painting mode you need to use Chrome Canary.

On Linux systems (and some Macs) you need to make sure that Chrome runs in compositing mode. This can be permanently enabled using the GPU compositing on all pages setting in about:flags.

How To Begin

Continuous painting mode can be enabled via the checkbox Enable continuous page repainting in the Web Inspector's settings (cog icon in the bottom right corner of the Web Inspector).

The small display in the top right corner shows you the measured paint times in milliseconds. More specifically it shows:

  • The last measured paint time on the left.
  • The minimum and maximum of the current graph on the right.
  • A bar chart displaying the history of the last 80 frames on the bottom (the line in the chart indicates 16ms as a reference point).

The paint time measurements are dependent on screen resolution, window size and the hardware Chrome is running on. Be aware that these things are likely to be different for your users.


This is how you can use continuous painting mode to track down elements and styles that add a lot of painting cost:

  1. Open the Web Inspector's settings and check Enable continuous page repainting.
  2. Go to the Elements panel and traverse the DOM tree with the arrow keys or by picking elements on the page.
  3. Use the H keyboard shortcut, a newly introduced helper, to toggle visibility on an element.
  4. Look at the paint time graph and try to spot an element that adds a lot of painting time.
  5. Go through the CSS styles of that element, toggling them on and off while looking at the graph, to find the style that causes the slow down.
  6. Change this style and do another Timeline recording to check if this made your page perform better.

The animation below shows toggling styles and its effect on paint time:

(Screencast: toggling styles in continuous painting mode.)

This example demonstrates how turning off either one of the CSS styles box-shadow or border-radius significantly reduces the painting time. Using both box-shadow and border-radius on an element leads to very expensive painting operations, because Chrome can't optimize for this. So if you have an element that gets a lot of repaints, like in the example, you should avoid this combination.
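For reference, the expensive combination looks like this (the selector is hypothetical):

```css
.expensive {
  border-radius: 5px;
  box-shadow: 0 2px 8px rgba(0, 0, 0, 0.5); /* costly when combined with border-radius */
}
```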


Continuous painting mode repaints the whole visible page. This is usually not the case when browsing a web page. Scrolling usually only paints the parts that haven't been visible before. And for other changes on the page, only the smallest possible area is repainted. So check with another Timeline recording if your style improvements actually had an impact on the paint times of your page.

When using continuous painting mode you might discover that, for example, the CSS styles border-radius and box-shadow add a lot of painting time. That's not to discourage using those features in general; they are awesome and we are happy they are finally here. But it's important to know when and where to use them. Avoid using them in areas with lots of repaints, and avoid overusing them in general.

Learn more about painting and related topics on

Live Demo

Click below for a demo where Paul Irish uses continuous painting to identify a uniquely expensive paint operation.

Stick your landings! position: sticky lands in WebKit

By Eric Bidelman

position: sticky is a new way to position elements and is conceptually similar to position: fixed. The difference is that an element with position: sticky behaves like position: relative within its parent, until a given offset threshold is met in the viewport.

Use cases

Paraphrasing from Edward O’Connor's original proposal of this feature:

Many web sites have elements that alternate between being in-flow and having position: fixed, depending on the user's scroll position. This is often done for elements in a sidebar that the page author wants to be always visible as the user scrolls, but which slot into a space on the page when scrolled to the top. Good examples are (the "Top Stories" sidebar) and (search results map).

Introducing sticky positioning


By simply adding position: sticky (vendor prefixed), we can tell an element to be position: relative until the user scrolls the item (or its parent) to be 15px from the top:

.sticky {
  position: -webkit-sticky;
  position: -moz-sticky;
  position: -ms-sticky;
  position: -o-sticky;
  top: 15px;
}
At top: 15px, the element becomes fixed.

To illustrate this feature in a practical setting, I've put together a DEMO which sticks blog titles as you scroll.

Old approach: scroll events

Until now, to achieve the sticky effect, sites set up scroll event listeners in JS. We actually use this technique as well in html5rocks tutorials. On screens smaller than 1200px, our table of contents sidebar changes to position: fixed after a certain amount of scrolling.

Here's the (now old) way to have a header that sticks to the top of the viewport when the user scrolls down, and falls back into place when the user scrolls up:

.sticky {
  position: fixed;
  top: 0;
}

.header {
  width: 100%;
  background: #F6D565;
  padding: 25px 0;
}

<div class="header"></div>

var header = document.querySelector('.header');
var origOffsetY = header.offsetTop;

function onScroll(e) {
  window.scrollY >= origOffsetY ? header.classList.add('sticky') :
                                  header.classList.remove('sticky');
}
document.addEventListener('scroll', onScroll);

Try it:

This is easy enough, but this model quickly breaks down if you want to do this for a bunch of DOM nodes, say, every <h1> title of a blog as the user scrolls.

Why JS is not ideal

In general, scroll handlers are never a good idea. Folks tend to do too much work and wonder why their UI is janky.

Something else to consider is that more and more browsers are implementing hardware accelerated scrolling to improve performance. The problem is that when JS scroll handlers are in play, browsers may fall back into a slower (software) mode. Now we're no longer running on the GPU. Instead, we're back on the CPU. The result? Users perceive more jank when scrolling your page.

Thus, it makes a lot of sense to have such a feature be declarative in CSS.


Unfortunately, there isn't a spec for this one. It was proposed on www-style back in June and just landed in WebKit. That means there's no good documentation to point to. One thing to note however, according to this bug, if both left and right are specified, left wins. Likewise, if top and bottom are used at the same time, top wins.

Support right now is Chrome 23.0.1247.0+ (current Canary) and WebKit nightly.

When milliseconds are not enough: performance.now()

By Paul Irish

The High Resolution Timer was added by the WebPerf Working Group to allow measurement in the Web Platform that's more precise than what we've had with +new Date and the newer Date.now().

So just to compare, here are the sorts of values you'd get back:

    Date.now();          // 1337376068250
    performance.now();   // 20303.427000007

You'll notice the two above values are many orders of magnitude different. performance.now() is a measurement of floating point milliseconds since that particular page started to load (the performance.timing.navigationStart timeStamp, to be specific). You could argue that it could have been the number of milliseconds since the unix epoch, but rarely does a web app need to know the distance between now and 1970. This number stays relative to the page because you'll be comparing two or more measurements against each other.

Monotonic time

Another added benefit here is that you can rely on the time being monotonic. Let's let WebKit engineer Tony Gentilcore explain this one:

Perhaps less often considered is that Date, based on system time, isn't ideal for real user monitoring either. Most systems run a daemon which regularly synchronizes the time. It is common for the clock to be tweaked a few milliseconds every 15-20 minutes. At that rate about 1% of 10 second intervals measured would be inaccurate.

Use Cases

There are a few situations where you'd use this high resolution timer instead of grabbing a basic timestamp:

  • benchmarking
  • game or animation runloop code
  • calculating framerate with precision
  • cueing actions or audio to occur at specific points in an animation or other time-based sequence


The high resolution timer is currently available in Chrome (Stable) as window.performance.webkitNow(), and this value is generally equal to the new argument value passed into the requestAnimationFrame callback. Pretty soon, WebKit will drop its prefix and this will be available through performance.now(). The WebPerfWG in particular, led by Jatinder Mann of Microsoft, has been very successful in unprefixing its features quite quickly.

In summary, performance.now() is...

  • a double with microseconds in the fractional part
  • relative to the navigationStart of the page rather than to the UNIX epoch
  • not skewed when the system time changes
  • available in Chrome stable, Firefox 15+, and IE10.

How to convert ArrayBuffer to and from String

By Renato Mangini

ArrayBuffers are used to transport raw data and several new APIs rely on them, including WebSockets, Web Intents, XMLHttpRequest version 2 and WebWorkers. However, because they recently landed in the JavaScript world, sometimes they are misinterpreted or misused.

Semantically, an ArrayBuffer is simply an array of bytes viewed through a specific mask. This mask, an instance of ArrayBufferView, defines how bytes are aligned to match the expected structure of the content. For example, if you know that the bytes in an ArrayBuffer represent an array of 16-bit unsigned integers, you just wrap the ArrayBuffer in a Uint16Array view and you can manipulate its elements using the brackets syntax as if the Uint16Array was an integer array:

    // suppose buf contains the bytes [0x02, 0x01, 0x03, 0x07]
    // notice the multibyte values respect the hardware endianness, which is little-endian on x86
    var bufView = new Uint16Array(buf);
    if (bufView[0] === 258) {   // 258 === 0x0102
      // ...
    }
    bufView[0] = 255;    // buf now contains the bytes [0xFF, 0x00, 0x03, 0x07]
    bufView[0] = 0xff05; // buf now contains the bytes [0x05, 0xFF, 0x03, 0x07]
    bufView[1] = 0x0210; // buf now contains the bytes [0x05, 0xFF, 0x10, 0x02]
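As the comment above notes, plain typed-array views follow the hardware's endianness. When you need a specific byte order, DataView lets you choose it explicitly:

```javascript
// DataView reads and writes pick the endianness per call via a boolean flag,
// instead of inheriting the platform's byte order.
var buf = new ArrayBuffer(2);
var view = new DataView(buf);
var bytes = new Uint8Array(buf);

view.setUint16(0, 0x0102, true);   // little-endian
var little = [bytes[0], bytes[1]]; // [0x02, 0x01]

view.setUint16(0, 0x0102, false);  // big-endian
var big = [bytes[0], bytes[1]];    // [0x01, 0x02]
```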

One common practical question about ArrayBuffer is how to convert a String to an ArrayBuffer and vice-versa. Since an ArrayBuffer is, in fact, a byte array, this conversion requires that both ends agree on how to represent the characters in the String as bytes. You probably have seen this "agreement" before: it is the String's character encoding (and the usual "agreement terms" are, for example, Unicode UTF-16 and iso8859-1). Thus, supposing you and the other party have agreed on the UTF-16 encoding, the conversion code could be something like:

    function ab2str(buf) {
      return String.fromCharCode.apply(null, new Uint16Array(buf));
    }

    function str2ab(str) {
      var buf = new ArrayBuffer(str.length * 2); // 2 bytes for each char
      var bufView = new Uint16Array(buf);
      for (var i = 0, strLen = str.length; i < strLen; i++) {
        bufView[i] = str.charCodeAt(i);
      }
      return buf;
    }

Note the use of Uint16Array. This is an ArrayBuffer view that aligns bytes of the ArrayBuffers as 16-bit elements. It doesn't handle the character encoding itself, which is handled as Unicode by String.fromCharCode and str.charCodeAt.

Note: A robust implementation of the String to ArrayBuffer conversion capable of handling more encodings is provided by the stringencoding library. But, for simple usage where you control both sides of the communication pipe, the code above is probably enough. A standardized API specification for String encoding is being drafted by the WHATWG working group.
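As a forward-looking aside, the WHATWG draft mentioned above became the Encoding API (TextEncoder/TextDecoder) in modern environments, which makes the UTF-8 version of this conversion a one-liner:

```javascript
// TextEncoder/TextDecoder convert between strings and UTF-8 bytes directly.
var encoded = new TextEncoder().encode('abc');   // Uint8Array [97, 98, 99]
var decoded = new TextDecoder().decode(encoded); // "abc"
```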

A popular StackOverflow question about this has a highly voted answer with a somewhat convoluted solution to the conversion: create a FileReader to act as a converter and feed a Blob containing the String into it. Although this method works, it has poor readability and I suspect it is slow. Since unfounded suspicions have driven many mistakes in the history of humanity, let's take a more scientific approach here. I have jsperf'ed the two methods and the result confirms my suspicion:

In Chrome 20, it is almost 27 times faster to use the direct ArrayBuffer manipulation code in this article than it is to use the FileReader/Blob method.

requestAnimationFrame API: now with sub-millisecond precision

By Paul Irish at

If you've been using requestAnimationFrame you've enjoyed seeing your paints synchronized to the refresh rate of the screen, resulting in the most high-fidelity animations possible. Plus, you're saving your users CPU fan noise and battery-power when they switch to another tab.

There is about to be a change to part of the API, however: the timestamp passed into your callback function is changing from a typical epoch timestamp to a high-resolution measurement of floating point milliseconds since the page was opened. If you use this value, you will need to update your code based on the explanation below.

Just to be clear, here is what I'm talking about:

   // assuming requestAnimationFrame has been normalized for all vendor prefixes..
   requestAnimationFrame(function(timestamp) {
     // the value of timestamp is changing
   });

If you're using the common requestAnimFrame shim provided here, then you're not using the timestamp value. You're off the hook. :)


Why? Well rAF helps you get the ultimate 60 fps that is ideal, and 60 fps translates to 16.7ms per frame. But measuring with integer milliseconds means we have a precision of 1/16 for everything we want to observe and target.

As you can see above, the blue bar represents the maximum amount of time you have to do all your work before you paint a new frame (at 60fps). You're probably doing more than 16 things, but with integer milliseconds you only have the ability to schedule and measure in those very chunky increments. That's not good enough.

The High Resolution Timer solves this by providing a far more precise figure:

   Date.now();                      //  1337376068250
   window.performance.webkitNow();  //  20303.427000007

The high resolution timer is currently available in Chrome as window.performance.webkitNow(), and this value is generally equal to the new argument value passed into the rAF callback. Once the spec progresses through standards further, the method will drop the prefix and be available through window.performance.now().

You'll also notice the two values above are many orders of magnitude apart. performance.now() is a measurement of floating point milliseconds since that particular page started to load (the performance.timing.navigationStart, to be specific).

In use

The key issue that crops up is animation libraries that use this design pattern:

   function MyAnimation(duration) {
     this.startTime = Date.now();
     this.duration = duration;
   }

   MyAnimation.prototype.tick = function(time) {
     var now = Date.now();
     if (time > now) {
       // ...
     }
   };

An edit to fix this is pretty easy... augment the startTime and now values like so:

   this.startTime = window.performance ?
                    (performance.now() + performance.timing.navigationStart) :
                    Date.now();

This is a fairly naive implementation: it doesn't use a prefixed now() method and also assumes Date.now() support, which isn't in IE8.

Feature detection

If you're not using the pattern above and just want to identify which sort of callback value you're getting you can use this technique:


   if (timestamp < 1e12) {
     // high resolution timer
   } else {
     // integer milliseconds since unix epoch
   }

   // ...

Checking if (timestamp < 1e12) is a quick duck test to see how big a number we're dealing with. Technically it could give a false positive, but only if a webpage stayed open continuously for about 30 years. We're not able to test whether the value is a floating point number (rather than floored to an integer): ask for enough high resolution timers and you're bound to get integer values at some point.
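Wrapped in a helper, the duck test looks like this. The threshold works because high-resolution rAF timestamps count milliseconds since navigationStart, while epoch timestamps passed 1e12 back in 2001:

```javascript
// Duck test: distinguish a high-resolution timestamp (ms since page load)
// from a legacy timestamp (ms since the unix epoch).
function isHighResTimestamp(timestamp) {
  return timestamp < 1e12;
}

console.log(isHighResTimestamp(20303.427000007)); // true: ms since page load
console.log(isHighResTimestamp(1337376068250));   // false: ms since unix epoch
```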

We plan on pushing this change out in Chrome 21, so if you're already taking advantage of this callback parameter, be sure to update your code!

Big boost to DOM performance - WebKit's innerHTML is 240% faster

By Sam Dutton at

We're very happy to see that some common DOM operations have just skyrocketed in speed. The changes were at the WebKit level, boosting performance for both Safari (JavaScriptCore) and Chrome (V8).

Chrome Engineer Kentaro Hara made seven code optimisations within WebKit; below are the results, which show just how much faster JavaScript DOM access has become:

DOM performance boosts summary

Below, Kentaro Hara gives details on some of the patches he made. The links are to WebKit bugs with test cases, so you can try out the tests for yourself. The changes were made between WebKit r109829 and r111133: Chrome 17 does not include them; Chrome 19 does.

Improve performance of div.innerHTML and div.outerHTML by 2.4x (V8, JavaScriptCore)

Previous behavior in WebKit:

  1. Create a string for each tag.
  2. Append each created string to a Vector<string> while parsing the DOM tree.
  3. After the parsing, allocate a string whose size is the sum of all strings in the Vector<string>.
  4. Concatenate all strings in the Vector<string>, and return the result as innerHTML.

New behavior in WebKit:

  1. Allocate one string, say S.
  2. Concatenate a string for each tag to S, incrementally parsing the DOM tree.
  3. Return S as innerHTML.

In a nutshell, instead of creating a lot of strings and then concatenating them, the patch creates one string and then simply appends strings to it incrementally.
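The two behaviors can be sketched in JavaScript (an analogy only, as an assumption for illustration; the real patch is in WebKit's C++ serializer):

```javascript
// Old behavior: collect a string per tag, then join them at the end.
function serializeOld(tags) {
  var pieces = [];
  for (var i = 0; i < tags.length; i++) {
    pieces.push('<' + tags[i] + '>');
  }
  return pieces.join(''); // one big allocation plus a final concatenation pass
}

// New behavior: append to a single string while walking the tree.
function serializeNew(tags) {
  var out = '';
  for (var i = 0; i < tags.length; i++) {
    out += '<' + tags[i] + '>';
  }
  return out;
}

console.log(serializeOld(['div', 'span']) === serializeNew(['div', 'span'])); // true
```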

Improve performance of div.innerText and div.outerText in Chromium/Mac by 4x (V8/Mac)

The patch just changed the initial buffer size for creating innerText. Changing the initial buffer size from 2^16 to 2^15 improved Chromium/Mac performance by 4x. This difference depends on the underlying malloc system.

Improve performance of CSS property accesses in JavaScriptCore by 35%

(Note: This is a change for Safari, not for Chrome.)

A CSS property string (e.g. .fontWeight, .backgroundColor) is converted to an integer ID in WebKit. This conversion is heavy. The patch caches the conversion results in a map (i.e. a property string => an integer ID), so that the conversion won't be conducted multiple times.
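The caching idea can be sketched as a simple memoization (the function names and the counter are stand-ins, as an assumption for illustration; WebKit's actual conversion happens in C++):

```javascript
// Memoize an expensive property-string-to-ID conversion.
var idCache = {};
var nextId = 1;
var conversions = 0;

// Stand-in for WebKit's heavy string-to-integer-ID conversion.
function cssPropertyToId(name) {
  conversions++; // counted so the cache's effect is visible
  return nextId++;
}

function cachedCssPropertyToId(name) {
  if (!(name in idCache)) {
    idCache[name] = cssPropertyToId(name); // convert once, remember the result
  }
  return idCache[name];
}

cachedCssPropertyToId('fontWeight');
cachedCssPropertyToId('fontWeight');      // cache hit: no new conversion
cachedCssPropertyToId('backgroundColor');
console.log(conversions); // 2
```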

How do the tests work?

They measure the time of property accesses. In the case of innerHTML, the test just measures the time to run the following code:

for (var i = 0; i < 1000000; i++)
  div.innerHTML;

The performance test uses a large body copied from the HTML spec.

Similarly, the CSS property-accesses test measures the time of the following code:

var spanStyle = span.style;
for (var i = 0; i < 1000000; i++) {
  spanStyle.fontWeight;
}

The good news is that Kentaro Hara believes more performance improvements will be possible for other important DOM attributes and methods.

Bring it on!

Kudos to Haraken and the rest of the team.