HTML5 Rocks

DevTools Digest December 2013

By Umar Hansa

A number of updated features have made it into the Chrome DevTools recently, some small, some big. We'll start out with the Element panel's updates and move on to talk about Console, Timeline, and more.

Disabled style rules copy as commented out

Copying entire CSS rules in the Styles pane now includes styles you've toggled off; they appear in your clipboard commented out.

Copy as CSS path

‘Copy as CSS path’ is now available as a menu item for DOM nodes in the Elements panel (similar to the Copy XPath menu item).


Generated CSS selectors aren't limited to use in your stylesheets or JavaScript; they can also serve as locator strategies in WebDriver tests.

View Shadow DOM element styles

Child elements of a shadow root can now have their styles inspected.

Console's copy() works for objects

The copy() method from the Command Line API now works for objects. Go ahead and try copy({foo:'bar'}) in the Console panel and notice how a stringified & formatted version of the object is now in your clipboard.
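The clipboard contents are, to a close approximation (the exact DevTools serialisation may differ), what JSON.stringify produces with two-space indentation:

```javascript
// Roughly what copy({foo: 'bar'}) places on the clipboard:
const obj = { foo: 'bar' };
const clipboardText = JSON.stringify(obj, null, 2);
console.log(clipboardText);
// {
//   "foo": "bar"
// }
```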

Regex filter for console

Filter console messages using regular expressions in the Console panel.
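Conceptually, the filter box keeps only the messages your pattern matches; a sketch over a hypothetical message list:

```javascript
// A regex filter over console messages: only matching lines stay visible.
const messages = ['GET /api/user 200', 'GET /api/cart 404', 'POST /api/cart 500'];
const filter = /4\d\d|5\d\d/; // show only error responses
const visible = messages.filter(m => filter.test(m));
console.log(visible); // ['GET /api/cart 404', 'POST /api/cart 500']
```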

Easily remove event listeners

Try getEventListeners(document).mousewheel[0]; in the Console panel to retrieve the first mousewheel event listener on the document. Carrying on from this, try $_.remove(); to remove that event listener ($_ holds the value of the most recently evaluated expression).
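getEventListeners() and $_ exist only in the DevTools console. The same look-up-then-remove workflow, sketched against a hypothetical plain-object registry so it runs anywhere:

```javascript
// Hypothetical registry standing in for the browser's listener bookkeeping.
const listeners = { mousewheel: [] };

function onWheel(event) { /* handle scroll wheel */ }
listeners.mousewheel.push(onWheel); // stand-in for addEventListener

// Mirrors getEventListeners(document).mousewheel[0] followed by $_.remove():
const first = listeners.mousewheel[0];
listeners.mousewheel = listeners.mousewheel.filter(fn => fn !== first);

console.log(listeners.mousewheel.length); // 0
```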

Removal of CSS warnings

Those "Invalid CSS property value"-style warnings you might have seen are now removed. There are ongoing efforts to make the implementation more robust against real-world CSS, including browser hacks.

Timeline operations summarized in pie chart

The Timeline panel now contains a pie chart in the Details pane which visually shows the source of your rendering costs; this helps you identify your bottlenecks at a glance.

You’ll find that much of the information which used to be displayed in popovers has now been promoted to its own pane. To view it, start a Timeline recording, select a frame, and take note of the new Details pane which contains a pie chart. When in Frames view, you’ll get interesting stats like average FPS (1000ms/frame duration) for the selected frame(s).
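The average-FPS stat is just 1000ms divided by frame duration, averaged over the selection; a quick sketch of that arithmetic (the helper name is ours):

```javascript
// Average FPS across selected frames, each duration in ms;
// 1000 / duration is the instantaneous FPS of one frame.
function averageFps(frameDurations) {
  const fps = frameDurations.map(d => 1000 / d);
  return fps.reduce((a, b) => a + b, 0) / fps.length;
}

console.log(averageFps([16, 16, 16])); // 62.5 (a smooth ~60fps run)
console.log(averageFps([33, 33, 40]));
```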

Image resize event details

Image resize and decode events in the Timeline panel now contain a link to the DOM node in the Elements panel.


The Image URL link takes you to the corresponding resource in the Resources panel.

GPU Frames

Frames occurring on the GPU are now shown at the top, above frames on the main thread.

Break on popstate listeners

'popstate' is now available as an event listener breakpoint in the Sources panel sidebar.

Rendering settings available in the drawer

Opening the drawer now presents a number of panes, one of which is the Rendering pane; use it to show paint rectangles, the FPS meter, and more. This is enabled by default at Settings > "Show 'Rendering' view in console drawer".

Copy image as data URL

Image assets in the Resources panel can now have their contents copied as a data URI (data:image/png;base64,iVBO...).

To try this out, find the image resource within Frames > [Resource] > Images and right click on the image preview to access the context menu, then select ‘Copy Image as Data URL’.

Data URI filtering

If you've never thought they belonged there, data URIs can now be filtered out of the Network tab. Select the Filter icon to view other resource filter types.


Network Timing bugs fixed

If you saw your image apparently taking 300,000 years to download, our apologies. ;) These incorrect timings for network resources have now been fixed.

More control over network recording

Network recording behaves a little differently now. First, the record button acts just like you would expect from Timeline or a CPU profile. And because you'd expect it, if you reload the page while DevTools is open, network recording will automatically start. It'll then turn off, so if you want to capture network activity after page load, turn it back on. This makes it easier to visualize your waterfall without late-breaking network requests skewing the results.

DevTools themes now available through extensions

User stylesheets are now available through DevTools Experiments (checkbox: "Allow custom UI themes"), which allows a Chrome extension to apply custom styling to DevTools. See Sample DevTools Theme Extension for an example.

That’s it for this edition of the DevTools digest. If you haven’t already, check out the November edition.

New Web Animations engine in Blink drives CSS Animations & Transitions

By Alex Danilo

Users expect smooth 60fps animations in modern multi-device UIs. Achieving that level of performance with the web’s current animation primitives can be difficult. Fortunately we’re working on a new Blink animation implementation that just shipped in Chrome Canary!

What’s exciting about this is that it simplifies the internals of Blink and lays the groundwork for inclusion of new API features from the Web Animations 1.0 specification.

Until now, CSS Animations and CSS Transitions had been separate implementations, written independently, that didn’t necessarily play well together. For the past few years, browser implementers have been working together on a next-generation animation model with support for things like synchronization, chaining animations to run in sequence, seeking to arbitrary points in animation time, allowing animations to change speed, reverse and more.

The effort led to the formation of the W3C specification Web Animations 1.0.

The first step from the Blink team in getting Web Animations out into the world is replacing the existing Blink CSS Animations/Transitions C++ implementation with the Web Animations engine. Having reached that milestone now, we’d like as many developers as possible to check nothing’s been broken and more importantly to keep an eye on the implementation effort and give us feedback on what’s good/bad or might need changing.

Next up will be implementation of an API that lets you create, modify, and interrogate animations from JavaScript. The API is designed to let animations run efficiently (by using declarative semantics so JavaScript manages creating animations but hands off control to the browser) whilst still exposing full animation control to the JavaScript developer.

We’re looking for active feedback on the proposed API to make sure we haven’t missed any features needed for powerful animation control. As with any new feature, the specification will continue to change, so now is the time to make your voice heard – ideally by subscribing to and contributing to the mailing list (and put [Web Animations] in the subject line so it gets noticed).

Try out the new engine that’s already powering CSS Animations & Transitions now and post any weirdness to the Chromium bug tracker so we know about it.

We’re excited to bring next-generation animation capabilities to Blink and look forward to working with other browser projects like WebKit and Mozilla who’ve also committed to implementing the new model.

300ms tap delay, gone away

By Jake Archibald

You'll find large articles throughout this site dedicated to shaving 10ms here and 90ms there in order to deliver a fast and fluid user experience. Unfortunately, every touch-based mobile browser, across platforms, has an artificial ~300ms delay between you tapping a thing on the screen and the browser considering it a "click". When people think of the web as being sluggish compared to native apps on mobile, this delay is one of the main contributors.

However, as of Chrome 32 for Android, which is currently in beta, this delay is gone for mobile-optimised sites, without removing pinch-zooming!

This optimisation applies to any site that uses:

<meta name="viewport" content="width=device-width">

(or any equivalent that makes the viewport <= device-width)

Why do clicks have a 300ms delay?

If you go to a site that isn't mobile optimised, it starts zoomed out so you can see the full width of the page. To read the content, you either pinch zoom, or double-tap some content to zoom it to full-width. This double-tap is the performance killer, because with every tap we have to wait to see if it might become a double-tap, and that wait is 300ms. Here's how it plays out:

  1. touchstart
  2. touchend
  3. Wait 300ms in case of another tap
  4. click

This pause applies to click events in JavaScript, but also to other click-based interactions such as links and form controls.
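The wait can be sketched as a tiny classifier over tap timestamps; the 300ms window is the browsers' value, but the function itself is purely illustrative:

```javascript
// A second tap within 300ms makes a double-tap, so a lone tap can only
// be promoted to a "click" after the window expires.
const DOUBLE_TAP_WINDOW_MS = 300;

function classifyTaps(tapTimes) {
  const events = [];
  for (let i = 0; i < tapTimes.length; i++) {
    const next = tapTimes[i + 1];
    if (next !== undefined && next - tapTimes[i] <= DOUBLE_TAP_WINDOW_MS) {
      events.push('double-tap');
      i++; // the second tap was consumed by the double-tap
    } else {
      events.push('click'); // fired only after the 300ms wait
    }
  }
  return events;
}

console.log(classifyTaps([0, 150])); // ['double-tap']
console.log(classifyTaps([0, 500])); // ['click', 'click']
```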

You can't simply shortcut this with touchend listeners either. Compare these demos on a mobile browser other than Chrome 32:

Tapping on the rows changes their colour. The touchend example is much faster, but depending on the browser it may also trigger after scrolling. This is because the spec doesn't define what can cancel the flow of touch events. Current versions of iOS Safari, Firefox, IE, and the old Android Browser trigger touchend after scrolling; Chrome doesn't.

Microsoft's PointerEvents spec does the right thing and specifies that pointerup doesn't trigger if a lower-level action such as scrolling occurs. However, currently only IE supports pointer events, although Chrome has a ticket for it. But even then, the 300ms delay would only be dropped on sites that used this listener in a way that applied to all links, form elements, and JavaScript interactions on the page.

How Chrome removed the 300ms delay

Chrome and Firefox for Android have, for some time now, removed the 300ms tap delay for pages with this:

<meta name="viewport" content="width=device-width, user-scalable=no">

Pages with this cannot be zoomed, therefore "double-tap to zoom" isn't an interaction, therefore there's no need to wait for double-taps. However, we also lose pinch-zooming.

Pinch-zooming is great for taking a closer look at a photo, some small print, or dealing with a set of buttons/links that are placed too closely together. It's an out-of-the-box accessibility feature.

If a site has…

<meta name="viewport" content="width=device-width">

…double-tap zooms in a little bit. Not a particularly useful amount. A further double-tap zooms back out. We feel this feature, on mobile-optimised pages, isn't useful. So we removed it! This means we can treat taps as clicks instantly, but we retain pinch-zooming.

Is this change an accessibility concern?

We don't believe so, but the reason we release beta versions of Chrome is so users can try new features and give us feedback.

We tried to imagine a user this may affect, someone who:

  • has a motor impairment that prevents multi-touch interaction such as pinch-zoom, but not two taps in the same area within 300ms
  • has a minor visual impairment that is overcome by the small amount of zooming provided by double-tap on mobile optimised sites

But they're catered for by the text sizing tools in Chrome's settings, or the screen magnifier in Android, which covers all sites and native apps, and can be activated by triple-tap.

Chrome accessibility settings; Android screen magnification

However, we may have missed something, so if you are affected by this change, or know someone who is, let us know in the comments or file a ticket.

Will other browsers do the same?

I don't know, but I hope so.

Firefox has a ticket for it and currently avoids the 300ms delay for unzoomable pages.

On iOS Safari, double-tap is a scroll gesture on unzoomable pages. For that reason they can't remove the 300ms delay. If they can't remove the delay on unzoomable pages, they're unlikely to remove it on zoomable pages.

Windows phones also retain the 300ms delay on unzoomable pages, but they don't have an alternative gesture like iOS, so it's possible for them to remove this delay as Chrome has. You can remove the delay using:

html {
    -ms-touch-action: manipulation;
    touch-action: manipulation;
}
Unfortunately this is a non-standard Microsoft extension to the pointer events spec. Also, programmatic fixes like this are opt-in by the developer, whereas the Chrome fix speeds up any existing mobile-optimised site.

In the meantime…

FastClick by FT Labs uses touch events to trigger clicks faster & removes the double-tap gesture. It looks at the amount your finger moved between touchstart and touchend to differentiate scrolls and taps.

Adding a touchstart listener to everything has a performance impact, because lower-level interactions such as scrolling are delayed by calling the listener to see if it event.preventDefault()s. Thankfully, FastClick will avoid setting listeners in cases where the browser already removes the 300ms delay, so you get the best of both!
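FastClick's scroll-vs-tap test can be sketched like this; the 10px threshold is an illustrative value, not FastClick's actual constant:

```javascript
// Compare where the finger went down and where it came up; past a
// small movement threshold the gesture is a scroll, not a tap.
const TAP_THRESHOLD_PX = 10; // illustrative threshold

function gestureType(start, end) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  return Math.hypot(dx, dy) <= TAP_THRESHOLD_PX ? 'tap' : 'scroll';
}

console.log(gestureType({ x: 100, y: 200 }, { x: 102, y: 203 })); // 'tap'
console.log(gestureType({ x: 100, y: 200 }, { x: 100, y: 320 })); // 'scroll'
```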

The Yeoman Monthly Digest #1

By Addy Osmani

Allo’ Allo’. Welcome to the first issue of the Yeoman monthly digest – a regular round-up of community articles, videos and talks to help you stay on top of what’s new with your favourite man-in-a-hat. We hope you find the articles, videos and updates below helpful!




Generator updates

If there are Yeoman resources you would like to suggest for the next issue, please feel free to suggest them to @yeoman on Twitter or +Yeoman on Plus and we’ll be sure to check em’ out.

The Landscape Of Front-end Development Automation (Slides)

By Addy Osmani

Writing a modern web app these days can sometimes feel like a tedious process: frameworks, boilerplates, abstractions, dependency management, build processes… the list of requirements for a front-end workflow appears to grow each year. What if, however, you could automate a lot of this?

In the slides from my FOWA keynote, I walk through a landscape of tools to keep you productive on the front-end. Learn how to iterate faster, get real-time feedback, avoid bugs through tools and incorporate these into a functional developer workflow.

Some key points

  • Desktop tools for workflow automation can save time on simple projects.
  • Command-line automation tools are better for complex projects where you need more flexibility.
  • Use an editor that gives you real-time feedback during development to maximize productivity.
  • New authoring features in the Canary DevTools make editing in the browser pleasant.
  • Augment your system workflow with productivity tools like Alfred.
  • Use cross-device testing, network throttling and visual regression testing for a better mobile workflow.

Choose tools you’ll use

Front-end tooling has come a long way in the last few years. That said, it’s hard to call developing for the web today awesome without also admitting that it’s now more complex.

The key to staying effective is choosing tools you’ll actually use. Spend time analyzing your personal workflow and select those tools that will help you become more effective.

If you have any questions, comments or tooling suggestions you’d like to share, please feel free to let us know in the comments below!

Web Audio live audio input - now on Android!

By Chris Wilson

One of the consistent questions I keep fielding over on Stack Overflow is “why doesn’t audio input work?” – to which the answer kept turning out to be “because you’re testing on Android, and we don’t have it hooked up yet.”

Well, I’m happy to announce that the Beta version of Chrome for Android (v31.0.1650+) has support for audio input in Web Audio! Check out my Audio Recorder demo on an Android device with Chrome Beta. We’re still testing it out with all devices; I’ve personally tested on a Nexus 4, a Galaxy S4 and a Nexus 7, but if you run into issues on other devices, please file them.

When I saw the support get checked in, I flipped back through some of the audio input demos I’ve done in the past to find a good demo to show it off. I quickly found that my Audio Recorder demo functioned well on mobile, but it wasn’t really designed for a good user experience on mobile devices.

So, I quickly used the skills I’m teaching in the upcoming Mobile Web Development course to whip it into shape – viewport, media queries, and flexbox to the rescue! Be sure to preregister for the course if you’re interested in taking your web development skills to the mobile world, too!

Flexbox layout isn't slow

By Paul Irish

TL;DR: Old flexbox (display: box) is 2.3x slower than new flexbox (display: flex).

A bit ago, Wilson Page wrote a great article for Smashing Magazine digging into how they brought the Financial Times webapp to life. In the article, Wilson notes:

As the app began to grow, we found performance was getting worse and worse.

We spent a good few hours in Chrome Developer Tools’ timeline and found the culprit: Shock, horror! — it was our new best friend, flexbox. The timeline showed that some layouts were taking close to 100 milliseconds; reworking our layouts without flexbox reduced this to 10 milliseconds!

Wilson's comments were about the original (legacy) flexbox that used display: box;. Unfortunately, they never got a chance to answer whether the newer flexbox (display: flex;) was faster, but over on CSS-Tricks, Chris Coyier opened that question.

We asked Ojan Vafai, who wrote much of the implementation in WebKit & Blink, about the newer flexbox model and implementation.

The new flexbox code has a lot fewer multi-pass layout codepaths. You can still hit multi-pass codepaths pretty easily though (e.g. flex-align: stretch is often 2-pass). In general, it should be much faster in the common case, but you can construct a case where it's equally as slow.

That said, if you can get away with it, regular block layout (non-float) will usually be as fast or faster than new flexbox, since it's always single-pass. But new flexbox should be faster than using tables or writing custom JS-based layout code.

To see the difference in numbers, I made a head-to-head comparison of old vs. new syntax.

Old vs. New Flexbox Benchmark

  • old flexbox vs new flexbox (with fallback)
  • 500 elements per page
  • evaluating page load cost to lay out the elements
  • averaged across 3 runs each
  • measured on desktop. (mobile will be ~10x slower)

Old flexbox: layout costs of ~43.5ms

New flexbox: layout costs of ~18.2ms

Summary: Old is 2.3x slower than new.
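The headline figure is just the ratio of the two averaged layout costs; the per-run values below are hypothetical numbers chosen to be consistent with the reported 43.5ms and 18.2ms averages:

```javascript
// Layout cost averaged across 3 runs, then compared as a ratio.
function mean(runs) {
  return runs.reduce((a, b) => a + b, 0) / runs.length;
}

const oldFlexboxMs = mean([44.1, 43.0, 43.4]); // hypothetical runs, 43.5ms average
const newFlexboxMs = mean([18.4, 18.0, 18.2]); // hypothetical runs, 18.2ms average

console.log(oldFlexboxMs / newFlexboxMs); // ≈ 2.39, i.e. the "2.3x slower"
```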

What should you do?

When using flexbox, always author for the new stuff: the IE10 tweener syntax and the new updated flexbox that’s in Chrome 21+, Safari 7+, Firefox 22+, Opera (& Opera Mobile) 12.1+, IE 11+, and Blackberry 10+. In many cases you can do a fallback to the legacy flexbox to pick up some older mobile browsers.


  • I also ran the benchmark using display:table-cell and it hit 30ms, right between the two flexbox implementations.
  • The benchmarks above only represent the Blink & WebKit side of things. Due to the timing of implementation, flexbox behavior is nearly identical across Safari, Chrome & Android.

Remember: Tools, not rules

What’s more important is optimizing what matters. Always use the timeline to identify your bottlenecks before spending time optimizing one sort of operation.

In fact, we've connected with Wilson and the Financial Times Labs team and, as a result, improved the Chrome DevTools coverage of layout performance tooling. We'll soon be adding the ability to view the relayout boundary of an element, and Layout events in the timeline are now loaded with details of the scope, root, and cost of each layout.

DevTools answers: What font is that?

By Paul Irish

Chrome DevTools can now tell you exactly what typeface is being used to render text.

Font stacks are a funny thing, more of a suggestion than a demand. Because the family you suggest may not be present, you're letting each user's browser handle the fall-through case, pulling something that will work and using that.

font-family: Baskerville, "Baskerville Old Face", "Hoefler Text", Garamond, "Times New Roman", serif;

As a developer, you want to know what font is actually being used. Here's how it works:

Under Computed Styles, you'll now see a summary of the typeface(s) used for that element. There are a few things to note here:

  • DevTools is reporting the actual typeface used by Chrome's text rendering layer. No more guessing which font serif or sans-serif is actually resolving to.
  • Is my webfont working? Sometimes it's hard to tell if you're seeing the webfont or the fallback system font. Now you can verify that the webfont is being applied. In the above example, we're pulling down Lobster as a webfont for the ::first-line style.
  • Fall-through fonts in your stack are easy to spot. Above, we had a typo spelling Merriweather and so it wasn't used, falling through to Lobster.
  • Is that Arial or Helvetica? Ask a designer or… ask DevTools. ;)
  • Works great with Google Webfonts, Typekit, local fonts, @font-face typefaces, unicode glyphs, and all other interesting font sources.
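A toy model of that fall-through, assuming a hypothetical set of available families:

```javascript
// Walk the stack and use the first family that can actually be supplied;
// a misspelled family simply never matches and is skipped.
const stack = ['Merriweathr', 'Lobster', 'Georgia', 'serif']; // note the typo
const available = new Set(['Lobster', 'Georgia', 'Times New Roman']);

const rendered = stack.find(family => available.has(family)) || 'browser default';
console.log(rendered); // 'Lobster': the misspelled first entry fell through
```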

Enjoy and please leave a comment if you have any feedback.

dialog element: Modals made easy

By Eiji Kitamura

Chrome Canary has landed support for the dialog element behind a flag. The dialog element can be used for popups in a web page.

  • show(): Open dialog.
  • close(): Close dialog. Takes an optional argument; if present, dialog.returnValue is set to it.
  • showModal(): Open modal dialog.
  • ::backdrop: Pseudo-element to style background behind a modal dialog.
  • close event: Fired when a dialog is closed.

Update on Dec 16th 2013

The dialog element now supports:

  • cancel event: Fired when the Escape key is pressed on a modal dialog. This event can be canceled using event.preventDefault().
  • autofocus attribute: The first form control in a modal dialog that has the autofocus attribute, if any, will be focused when the dialog is shown. If there is no such element, the first focusable element is focused.
  • form[method="dialog"]: Only valid inside a dialog. Upon form submission, it closes the dialog and sets dialog.returnValue to the value of the submit button that was used.

Check out details with a live demo and polyfill.

Turn it on by enabling "Enable experimental Web Platform features" in chrome://flags.
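Put together, a minimal sketch of the element in use (the id, text, and button values here are illustrative, and the flag above must be enabled):

```html
<dialog id="confirm">
  <form method="dialog">
    <p>Delete this item?</p>
    <!-- method="dialog": submitting closes the dialog and sets
         dialog.returnValue to the submit button's value -->
    <button value="yes" autofocus>Yes</button>
    <button value="no">No</button>
  </form>
</dialog>
<script>
  var dialog = document.getElementById('confirm');
  dialog.addEventListener('close', function () {
    console.log(dialog.returnValue); // 'yes' or 'no'
  });
  dialog.showModal(); // modal, with ::backdrop rendered behind it
</script>
```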

Alpha transparency in Chrome video

By Sam Dutton

Chrome Canary now supports video alpha transparency in WebM.

In other words, Chrome takes the alpha channel into account when playing 'green screen' videos encoded to WebM with an alpha channel. This means you can play videos with transparent backgrounds: over web pages, images or even other videos.

There's a demo, though it only works in Chrome Canary at this point, and you'll need to enable VP8 alpha transparency from the chrome://flags page. Somewhat surreal, and a bit rough around the edges (literally), but you get the idea!

Here's a screencast:

How to make alpha videos

The method we describe uses the open source tools Blender and ffmpeg:

  1. Film your subject in front of a single colour background such as a bright green curtain.
  2. Process the video to build an array of PNG still images with transparency data.
  3. Encode to a video format (in this case, WebM).

There are also proprietary tools to do the same job, such as Adobe After Effects, which you may find simpler.

1. Make a green screen video

First of all, you need to film your subject in a way that everything in the background can be 'removed' (made transparent) by subsequent processing.

The easiest way to do this is to film in front of a single colour background, such as a screen or curtain. Green or blue are the colors most often used, mostly because of their difference from skin tones.

There are several guides online to filming green screen video (also known as chroma key) and lots of places to buy green and blue screen backdrops. Alternatively, you can paint a background with chroma key paint.

The Great Gatsby VFX reel shows just how much can be accomplished with green screen.

Some tips for filming:

  • Ensure your subject does not have clothes or objects that are the same color as the backdrop, otherwise these will show up as 'holes' in the final video. Even small logos or jewelry can be problematic.
  • Use consistent, even lighting, and avoid shadows: the aim is to have the smallest possible range of colors in the background that will subsequently need to be made transparent.
  • Using multiple diffused lights helps to avoid shadows and background color variations.
  • Avoid shiny backgrounds: matte surfaces diffuse light better.

2. Create raw alpha video from green screen video

The following steps describe one way to create a raw alpha video from green screen videos:

  1. Once you've shot a green screen video, you can use an open source tool like Blender to convert the video to an array of PNG files with alpha data. Use Blender's color keying to remove the green screen and make it transparent. (Note that PNG is not compulsory: any format that preserves alpha channel data is fine.)
  2. Convert the array of PNG files to a raw YUVA video using an open source tool such as ffmpeg:

    ffmpeg -i image%04d.png -pix_fmt yuva420p -f rawvideo video.raw

    Alternatively encode the files directly to WebM, using an ffmpeg command like this:

    ffmpeg -i image%04d.png output.webm

If you want to add audio, you can use ffmpeg to mux that in with a command like this:

ffmpeg -i image%04d.png -i audio.wav output.webm

3. Encode alpha video to WebM

Raw alpha videos can be encoded to WebM in two ways.

  1. With ffmpeg: we added support to ffmpeg to encode WebM alpha videos.

    Use ffmpeg with an input video including alpha data, set the output format to WebM, and encoding will automatically be done in the correct format as per the spec. (Note: you'll currently need to make sure to get the latest version of ffmpeg from the git tree for this to work.)

    Sample command:

    ffmpeg -i myAlphaVideo.webm output.webm

  2. Using webm-tools:

    git clone

    webm-tools is a set of simple open source tools related to WebM, maintained by the WebM Project authors, including a tool for creating WebM videos with alpha transparency.

    Run the binary with --help to see the list of options supported by alpha_encoder.

4. Playback in Chrome

To play the encoded WebM file in Chrome, simply set the file as the source of a video element.

As of now, VP8 alpha playback is behind a flag, so you have to either enable it in about:flags or set the command line flag --enable-vp8-alpha-playback when you start Chrome. When the flag is enabled, alpha playback also works with MediaSource.

How did they do it?

We talked to Google engineer Vignesh Venkatasubramanian about his work on the project. He summarised the key challenges involved:

  • The VP8 bitstream had no support for alpha channel. So we had to incorporate alpha without breaking the VP8 bitstream and without breaking existing players.
  • Chrome's renderer was not capable of rendering videos with alpha.
  • Chrome has multiple rendering paths for multiple hardware/GPU devices. Every rendering path had to be changed to support rendering of alpha videos.

We can think of lots of interesting use cases for video alpha transparency: games, interactive videos, collaborative storytelling (add your own video to a background video/image), videos with alternative characters or plots, web apps that use overlay video components...

Happy film making! Let us know if you build something amazing with alpha transparency.