HTML5 Rocks

Yo Polymer – A Whirlwind Tour Of Web Component Tooling

By Addy Osmani

Web Components are going to change everything you think you know about building for the web. For the first time, the web will have low level APIs allowing us to not only create our own HTML tags but also encapsulate logic and CSS. No more global stylesheet soup or boilerplate code! It’s a brave new world where everything is an element.

In my talk from DotJS, I walk through what Web Components have to offer and how to build them using modern tooling. I’ll show you Yeoman, a workflow of tools to streamline creating web-apps using Polymer, a library of polyfills and sugar for developing apps using Web Components in modern browsers today.


In this talk you will learn:

  • The four different specs composing Web Components: Custom Elements, Templates, Shadow DOM and HTML Imports
  • How to define your own custom elements and install elements created by others using Bower
  • How to spend less time writing JavaScript and more time constructing pages
  • How to use modern front-end tooling (Yeoman) to scaffold an application using Polymer with generator-polymer
  • How Polymer supercharges creating web components

For example, to install Polymer's Web Component polyfills and the library itself, you can run this one liner:

bower install --save Polymer/platform Polymer/polymer

This creates a bower_components folder containing the above packages; the --save flag also records them in your app's bower.json file.

Later, if you wanted to install Polymer's accordion element you could run:

bower install --save Polymer/polymer-ui-accordion

and then import it into your application:

<link rel="import" href="bower_components/polymer-ui-accordion/polymer-ui-accordion.html">

To save time, you can scaffold out a new Polymer app – with all the dependencies you need, boilerplate code and tooling for optimizing your app – using this other Yeoman one-liner:

yo polymer

Bonus walkthrough

I also recorded a 30 minute bonus walkthrough of the Polymer Jukebox app I show in the talk.

Covered in the bonus video:

  • What the “everything is an element” mantra means
  • How to use Bower to install Polymer’s Platform polyfills and elements
  • Scaffolding our Jukebox app with the Yeoman generator and sub-generators
  • Understanding the platform features scaffolded out via boilerplate
  • How I functionally ported an Angular app over to Polymer.

We also make use of Yeoman sub-generators for scaffolding our new Polymer elements. For example, to create the boilerplate for an element foo, we run:

yo polymer:element foo

which will prompt us for whether we would like the element automatically imported, whether a constructor is required and for a few other preferences.

The latest sources for the app shown in both talks are now up on GitHub. I’ve refactored it a little further to be better organized and a little easier to read.


Further reading

In summary, Polymer is a JavaScript library that enables Web Components now in modern web browsers as we wait for them to be implemented natively. Modern tooling can help improve your workflow using them and you might enjoy trying out Yeoman and Bower when developing your own tags.


Web apps that talk - Introduction to the Speech Synthesis API

By Eric Bidelman

The Web Speech API adds voice recognition (speech to text) and speech synthesis (text to speech) to JavaScript. The post briefly covers the latter, as the API recently landed in Chrome 33 (mobile and desktop). If you're interested in speech recognition, Glen Shires had a great writeup a while back on the voice recognition feature, "Voice Driven Web Apps: Introduction to the Web Speech API".


The most basic use of the synthesis API is to pass speechSynthesis.speak() an utterance:

var msg = new SpeechSynthesisUtterance('Hello World');
window.speechSynthesis.speak(msg);


However, you can also alter parameters to affect the volume, speech rate, pitch, voice, and language:

var msg = new SpeechSynthesisUtterance();
var voices = window.speechSynthesis.getVoices();
msg.voice = voices[10]; // Note: some voices don't support altering params
msg.voiceURI = 'native';
msg.volume = 1; // 0 to 1
msg.rate = 1; // 0.1 to 10
msg.pitch = 2; //0 to 2
msg.text = 'Hello World';
msg.lang = 'en-US';

msg.onend = function(e) {
  console.log('Finished in ' + e.elapsedTime + ' seconds.');
};

speechSynthesis.speak(msg);


Setting a voice

The API also allows you to get a list of the voices the engine supports:

speechSynthesis.getVoices().forEach(function(voice) {
  console.log(voice.name, voice.default ? '(default)' : '');
});

Then set a different voice by setting .voice on the utterance object:

var msg = new SpeechSynthesisUtterance('I see dead people!');
msg.voice = speechSynthesis.getVoices().filter(function(voice) {
  return voice.name == 'Whisper';
})[0];


In my Google I/O 2013 talk, "More Awesome Web: features you've always wanted", I showed a Google Now/Siri-like demo of using the Web Speech API's SpeechRecognition service with the Google Translate API to auto-translate microphone input into another language:


Unfortunately, it used an undocumented (and unofficial) API to perform the speech synthesis. Now we have the full Web Speech API to speak back the translation! I've updated the demo to use the synthesis API.

Browser Support

Chrome 33 has full support for the Web Speech API, while Safari on iOS 7 has partial support.

Feature detection

Since browsers may support each portion of the Web Speech API separately (as is the case with Chromium), you may want to feature detect each portion separately:

if ('speechSynthesis' in window) {
  // Synthesis support. Make your web apps talk!
}

if ('SpeechRecognition' in window) {
  // Speech recognition support. Talk to your apps!
}

Chrome Dev Summit: Platforms Summary

By Seth Ladd

Dart

Dart compiles to JavaScript, sometimes generating code that's faster than hand-written JavaScript. Watch Dart co-founder Kasper Lund explain how the dart2js compiler performs local and global optimizations to emit fast and semantically correct JavaScript code. With tree shaking, type inference, and minification, Dart can help you optimize your web app.

Slides: Dart

Chrome Apps

Chrome Apps provide the power and user experience of native apps with the development simplicity and security of the Web, and integrate seamlessly with Google services like Drive. Chrome Apps run on Mac, Windows, Linux, and ChromeOS, as well as iOS and Android, right out of the box.

Slides: Chrome Apps

Portable Native Client

Portable Native Client is a technology that enables portable, secure execution of native applications in Chrome. This extension of the Native Client project brings the performance and low-level control of native code to modern web browsers without sacrificing the security and portability of the web.

PNaCl helps developers produce a platform-independent form of their native application and run it in the browser without any installs. Behind the scenes, Chrome translates PNaCl applications to machine code at runtime to achieve near-native performance. On other browsers, PNaCl applications can use Emscripten and pepper.js to maintain functionality with a minimal performance hit.

Slides: PNaCl

Chrome Dev Summit: Open Web Platform Summary

By Sam Dutton

Blink

by Greg Simon & Eric Seidel

Blink is Chrome's open-source rendering engine. The Blink team is evolving the web and addressing the issues encountered by developers.

A number of behind-the-scenes improvements have been under way since our April launch.

The first thing we did was delete half our source code, which we didn't actually need. We're still not done! And we're not doing this blind: code removal is based on anonymously reported aggregate statistics from Chrome users who opt in to reporting.

We publish a new developer API every six weeks, matching Chrome's shipping schedule.

One big change we made when we forked Blink from WebKit was to add an intents system: before we change the web platform, we send a public announcement to blink-dev announcing our intent to add or remove a feature. Then we go off and code it! The very next day after the feature is checked in, it's already shipping in our Canary builds. The feature is off by default, but you can turn it on using about:flags.

Then, on our public mailing list we announce an intent to ship.

At you can see the features we've worked on, the features we've shipped, and those we're planning to deprecate. You can also check the Chromium Releases blog, which has links to bugs and to our tracker dashboard.

Another big change is that we're removing WebKit prefixes. The intent is not to use Blink prefixes, but to have run-time flags (and not just compile-time flags).

Android WebView has been a big challenge – but HTML5Test shows that things are getting better. We're much closer to desktop in terms of having one set of web platform APIs everywhere (Web Audio is a great example of this!)

But how does the sausage machine work? Every single change we make to Blink is immediately run through over 30,000 tests, not to mention all the Chromium tests that run later. We use 24-hour sheriffing, with thousands of bots, thousands of benchmarks, and systems that throw millions of broken web pages at our engine to make sure it doesn't fall over. We know that mobile is significantly slower, and this is something we're working hard to improve.

So what's new?

  • Web Components: check out Eric Bidelman's talk!
  • Web Animations: complex, synchronized, high-performance animations that use the GPU wherever possible
  • Partial Layout: only compute what you need!
  • CSS Grid
  • Responsive images: srcset or srcN or ?
  • Faster text autosizing, and consistent sub-pixel fonts
  • Skia, the graphic system used by Blink, is moving from GDI to DirectWrite on Windows

We want to know what you have to say!

If you have C++ in your blood and want to write C++ with us, all of our code is open. You don't have to tell anybody or evangelize to us; you can simply post a patch or file a bug!

Slides: Blink

Got SSL?

by Parisa Tabriz

More people are connected to the web today than ever before – and from more places.

We're connected with our laptops, phones and tablets, and probably soon enough with personal devices and accessories. We access the internet from untrusted and sometimes even hostile networks. With so much of our lives moving online, it's imperative we take steps to protect our data and our users' data.

Above all, as developers we need to understand the necessity and practicality of SSL.

What's SSL? It stands for Secure Sockets Layer, and it's a cryptographic protocol designed to provide communication security over the internet. It guarantees privacy, via encryption, and integrity, to prevent snooping or tampering with your internet connection. SSL has its flaws, but it's the leading way – and really the only way – to ensure any kind of data communication security on the internet.

According to SSL Pulse, a year ago we had just under 15% SSL adoption; we're now over 50%.

Two acronyms:

  • TLS: for most intents and purposes the same as SSL. To be precise, SSL 3.1 was renamed to TLS, and TLS is the IETF Standard name. But they're interchangeable!

  • HTTPS: this is HTTP over SSL, just the layering of the security capabilities of SSL and standard HTTP. First the client–server handshake, using public/private key cryptography to create a shared key – which is used by the second part of the SSL protocol to encrypt communication.

Networking on the internet may feel safe, immediate and fast. It feels like we're talking directly to the website, but in reality it's not a direct connection. Our communications go via a wifi router, an ISP, and potentially other intermediary proxies between your device and the website. Without HTTPS, all of our communication is in plain text.

Trouble is, users rarely type in a full URL specifying HTTPS – or they click a link using HTTP. Worse, it's possible to mount a (wo)man-in-the-middle attack and replace HTTPS with HTTP. A tool called SSLstrip, introduced in 2009, does just that. Firesheep, from 2010, simply listened on open wifi networks for cookies being sent in the clear: that meant you could listen in on someone's chat, or log in to their Facebook account.

But SSL is (relatively) cheap, fast and easy to deploy (check out and Ilya Grigorik's book High Performance Browser Networking). For non-commercial use, you can even get free certificates from! Public Key Pinning is designed to give website operators a means to restrict which certificate authorities can actually issue certificates for their sites.

"In January this year (2010), Gmail switched to using HTTPS for everything by default. .. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL accounts for < 1% of the CPU load, < 10 KB of memory per connection, and < 2% of network overhead…

If you stop reading now you only need to remember one thing: SSL is not computationally expensive any more.”

Overclocking SSL, Adam Langley (Google)

Lastly, a couple of bugs we see most commonly:

  • Mixed content: sites that use HTTP as well as HTTPS. Your users will get annoyed because they have to click a permission button to load content. (Chrome and Firefox actually bar mixed content from iframes.) Make sure that all of the resources on an HTTPS page are loaded over HTTPS, for example by using relative or scheme-relative URLs such as <link rel="stylesheet" href="//...">
  • Insecure cookies: sent in the clear via an HTTP connection. Avoid this by setting the secure attribute on cookie headers. You can also use the Strict-Transport-Security header (HSTS) to require an SSL connection.
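Both fixes come down to response headers. A minimal sketch; the cookie value and max-age here are illustrative:

```javascript
// Sketch: headers that address the insecure-cookie bug above.
function secureHeaders(maxAgeSeconds) {
  return {
    // Secure keeps the cookie off plain-HTTP connections;
    // HttpOnly keeps it out of reach of page scripts.
    'Set-Cookie': 'session=abc123; Secure; HttpOnly',
    // HSTS: tell the browser to always use HTTPS for this origin.
    'Strict-Transport-Security': 'max-age=' + maxAgeSeconds + '; includeSubDomains'
  };
}

var headers = secureHeaders(31536000); // one year
console.log(headers['Strict-Transport-Security']);
// → max-age=31536000; includeSubDomains
```

Once a browser has seen the HSTS header, it rewrites plain HTTP requests to that origin to HTTPS itself, defeating SSL-stripping attacks like the one described above.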


  • If you care about the privacy and integrity of your users' data, you need to be using SSL. It's faster, easier, and cheaper than ever.
  • Avoid common implementation gotchas, like mixed content bugs or not setting the right HTTP header bits.
  • Use relative or scheme relative URLs.
  • Check out some of the new cool stuff, like HSTS and cert pinning

Slides: Got SSL?

Media APIs for the multi-device Web

by Sam Dutton & Jan Linden

Along with a proliferation of new devices and platforms on the web, we're seeing huge growth in audio, video and realtime communication. Online media is transforming the way we consume media of all kinds.

A UK government study found that 53% of adults 'media multi-task' while watching TV: using mobile devices to share and consume media. In many countries TV viewing is down and online viewing is up. In China, for example, in 2012 only 30% of households in Beijing watched TV, down from 70% in 2009. According to the W3C Highlights 2013, 'In the past year video-watching on mobile devices has doubled. This year in the US, the average time spent with digital media per day will surpass TV viewing. Viewing is no longer a passive act. In the US, 87% of entertainment consumers say they use at least one second-screen device while watching television.' According to Cisco 'video ... will be in the range of 80 to 90 percent of global consumer traffic by 2017'. That equates to nearly a million minutes of video every second.

So what do we have for web developers? An ecosystem of media APIs for the open Web: standardized, interoperable technologies that work across multiple platforms.


  • WebRTC provides realtime communication in the browser, and is now widely supported on mobile and desktop. In total there are already over 1.2 billion WebRTC endpoints.
  • Web Audio provides sophisticated tools for audio synthesis and processing.
  • Web MIDI, integrated with Web Audio, allows interaction with MIDI devices.
  • The audio and video elements are now supported on more than 85% of mobile and desktop browsers.
  • Media Source Extensions can be used for adaptive streaming and time shifting.
  • EME enables playback of protected content.
  • Transcripts, captions and the track element enable subtitles, captions, timed metadata, deep linking and deep search.

Slides: Media APIs for the multi-device Web

Chrome Dev Summit: Performance Summary

By Paul Lewis

#perfmatters: Tooling techniques for the performance ninja

Knowing your way around your development tools is key to becoming a performance Grand Master. Colt stepped through the three pillars of performance – network, compute and render – providing a tour of the key problems in each area and the tools available for finding and eradicating them.


  • You can now profile Chrome on Android with the DevTools you know and love from desktop.
  • The iteration loop for performance work is: gather data, achieve insight, take action.
  • Prioritize assets that are on the critical rendering path for your pages.
  • Avoid painting; it’s super expensive.
  • Avoid memory churn and executing code during critical times in your app.

#perfmatters: Optimizing network performance

Network latency typically accounts for 70% of a site’s total page load time. That’s a large percentage, but it also means that any improvements you make there will reap huge benefits for your users. In this talk Ilya stepped through recent changes in Chrome that will improve loading time, as well as a few changes you can make in your environment to help keep network load to an absolute minimum.


  • Chrome M27 has a new and improved resource scheduler.
  • Chrome M28 has made SPDY sites (even) faster.
  • Chrome’s simple cache has received an overhaul.
  • SPDY / HTTP/2.0 offer huge transfer speed improvements. There are mature SPDY modules available for nginx, Apache and Jetty (to name just three).
  • QUIC is a new and experimental protocol built on top of UDP; it’s early days, but however it works out, users will win.

#perfmatters: 60fps layout and rendering

Hitting 60fps in your projects directly correlates with user engagement and is crucial to their success. In this talk Nat and Tom talked about Chrome’s rendering pipeline, some common causes of dropped frames and how to avoid them.


  • A frame is 16ms long. It contains JavaScript, style calculations, painting and compositing.
  • Painting is extremely expensive. A Paint Storm is where you unnecessarily repeat expensive paint work.
  • Layers are used to cache painted elements.
  • Input handlers (touch and mousewheel listeners) can kill responsiveness; avoid them if you can. Where you can’t, keep them to a minimum.

#perfmatters: Instant mobile web apps

The Critical Rendering Path refers to anything (JavaScript, HTML, CSS, images) that the browser requires before it is able to begin painting the page. Prioritizing the delivery of assets on the critical rendering path is a must, particularly for users on network-constrained devices such as smartphones on cellular networks. Bryan talked through how the team at Google went through the process of identifying and prioritizing the assets for the PageSpeed Insights website, taking it from a 20 second load time to just over 1 second!


  • Eliminate render-blocking JavaScript and CSS.
  • Prioritize visible content.
  • Load scripts asynchronously.
  • Render the initial view server-side as HTML and augment with JavaScript.
  • Minimize render-blocking CSS; deliver only the styles needed to display the initial viewport, then deliver the rest.
  • Large data URIs inlined in render-blocking CSS are harmful for render performance; they are blocking resources, whereas image URLs are non-blocking.
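The data-URI point is easy to quantify: base64 encoding inflates a resource by roughly a third, and every inflated byte lands on the render-blocking CSS download. A back-of-the-envelope sketch (the 30 KB image size is illustrative):

```javascript
// Approximate size of an image once inlined as a base64 data URI:
// 4 output characters per 3 input bytes, plus the data: prefix.
function dataUriLength(imageBytes) {
  var prefix = 'data:image/png;base64,';
  return prefix.length + Math.ceil(imageBytes / 3) * 4;
}

console.log(dataUriLength(30000)); // 40022 – a ~30 KB image becomes ~40 KB of blocking CSS
```

A separate image URL of the same size would download in parallel without holding up first paint.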

Chrome Dev Summit: Polymer declarative, encapsulated, reusable components

By Eric Bidelman

Polymer is one gateway into the amazing future of Web Components. We want to make it easy to consume and build custom elements. For the past year, the team has been working hard on a set of polyfills for the evolving specifications. On top of that, we've created a convenient sugaring library to make building web components easier. Lastly, we're crafting a set of UI and utility elements to reuse in your apps. At the 2013 Chrome Dev Summit, I dove into the different parts of Polymer and the philosophy behind our "Everything is an element" mantra.


"Everything is an element" (from <select> to custom elements)


Building web pages in the 90s was limiting, but powerful. We only had a few elements at our disposal. The powerful part?...everything was declarative. It was remarkably simple to create a page, add form controls, and create an "app" without writing gobs of JavaScript.

Take the humble <select> element. There is a ton of functionality built into the element, simply by declaring it:

  • Customizable through HTML attributes
  • Renders children (e.g. <option>) with a default UI, but configurable via attributes.
  • Useful in other contexts like <form>
  • Has a DOM API: properties and methods
  • Fires events when interesting things happen

Web Components provide the tools to get back to this heyday of web development. One where we can create new elements, reminiscent of <select>, but designed for the use cases of 2014. For example, if AJAX was invented today it would probably be an HTML tag (example):

<polymer-ajax url="..."></polymer-ajax>

Or responsive elements that data-bind to a queryMatches attribute:

<polymer-media-query query="max-width:640px" queryMatches="{{isPhone}}"></polymer-media-query>

This is exactly the approach we're taking in Polymer. Instead of building monolithic JavaScript-based web apps, let's create reusable elements. Over time, an entire app grows out of composing smaller elements together. Heck, an entire app could be an element:


Building web components with Polymer's special sauce


Polymer contains a number of conveniences for building web component based applications:

  • Declarative element registration: <polymer-element>
  • Declarative inheritance: <polymer-element extends="...">
  • Declarative two-way data-binding: <input id="input" value="{{foo}}">
  • Declarative event handlers: <button on-click="{{handleClick}}">
  • Published properties: xFoo.bar = 5 <-> <x-foo bar="5">
  • Property observation: barChanged: function() {...}
  • PointerEvents / PointerGestures by default
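Several of these conveniences come together in a single element definition. Here is a minimal sketch in Polymer's 0.x syntax; the x-counter element and its properties are made up for illustration:

```html
<polymer-element name="x-counter" attributes="count">
  <template>
    <!-- Two-way binding and a declarative on-click handler -->
    <button on-click="{{increment}}">Clicked {{count}} times</button>
  </template>
  <script>
    Polymer('x-counter', {
      count: 0,                    // published property: <x-counter count="5">
      increment: function() {
        this.count++;
      },
      countChanged: function() {   // observer, called whenever count changes
        console.log('count is now ' + this.count);
      }
    });
  </script>
</polymer-element>
```

Used as <x-counter count="10"></x-counter>, the count attribute and the count property stay in sync, and no imperative wiring code is needed.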

Moral of the story is that writing Polymer elements is all about being declarative. The less code you have to write, the better ;)

Web Components: the future of web development


I would be remiss if I didn't give a shout-out to the standards behind Web Components. After all, Polymer is based on these evolving foundational APIs.

We're on the cusp of a very exciting time in web development. Unlike other new features being added to the web platform, the APIs that make up Web Components are not shiny or user-facing. They're purely for developer productivity. Each of the four main APIs is useful by itself, but together magical things happen!

  1. Shadow DOM - style and DOM encapsulation
  2. Custom Elements - define new HTML elements and give them an API with properties and methods
  3. HTML Imports - a distribution model for packages of CSS, JS, and HTML
  4. Templates - proper DOM templating for defining inert chunks of markup to be stamped out later

If you want to learn more about the fundamentals of the APIs, check out

Chrome Dev Summit: Mobile Summary

By Paul Kinlan

The Chrome Dev Summit finished a couple of weeks ago, and here's the first in a series of reports from the event. There was a strong emphasis on Mobile and Cross-device development, so we'll kick off with that!

Best UX patterns for mobile web apps by Paul Kinlan

After an analysis of the mobile-friendliness of the top 1000 sites, we found some problem areas: 53% still provide a desktop-only experience, 82% of sites have issues with interactivity on a mobile device, and 64% of sites have text that users will have trouble reading.

Quick hits to dramatically improve your mobile web experience:

  • Always define a viewport
  • Fit content inside the viewport
  • Keep fonts sized at a readable level
  • Limit use of Web Fonts
  • Size and space out tap targets appropriately
  • Use the semantic types for input elements
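The first two quick hits usually come down to a single, standard meta tag in your page's <head>:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```

width=device-width matches the layout viewport to the device's width, and initial-scale=1 establishes a 1:1 relationship between CSS pixels and device-independent pixels, so content fits without the user having to pinch-zoom.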

PageSpeed Insights just launched a UX analysis for determining how mobile-friendly your site is. It will help you find common problems with your site's mobile UX. Try it out!

Slides: Best UX patterns for mobile web apps

Multi-device Accessibility by Alice Boxhall

Users will be accessing your sites and services from a multitude of devices with a wide range of different accessibility requirements. By using the correct semantic elements and correct ARIA roles you help give the browser and assistive technology a much improved understanding of your page.

Slides: Multi-device Accessibility

Key ways to understand and address a11y issues

  • Ensure you have a good keyboard-only user experience
  • Express the semantics of your interface with correct element choice and ARIA
  • Use ChromeVox on desktop and TalkBack on Android to test.
  • Try the Accessibility Developer Tools Chrome extension
  • A more diverse audience is getting online, which further amplifies the need to make your sites accessible

Build Mobile Apps using the Chrome WebView by Matt Gaunt

We all know the problems that developers have had in the past building for WebView: Limited HTML5 features, no debugging tools, no build tools. With the introduction of a Chromium powered WebView in Android 4.4 (KitKat) developers now have a huge range of new tools at their disposal to build great native apps using the WebView.

The WebView supports full remote debugging with the same tools you use for Chrome. You can also take your trusted web development workflow with Grunt and integrate it into your native stack tooling via Gradle. Further merging the two worlds, there's a clever trick that uses the Chrome DevTools to test your native code from JavaScript.

Slides: Build Mobile Apps using the Chrome WebView

Effective WebView development takeaways

  • It’s not the new features that are important, it’s the tooling that you can now use to speed up your workflow
  • Don’t try to emulate the native UI, but do remove some of the tells that it is web content
  • Use native implementations of features when appropriate, e.g. use the DownloadManager rather than XHR for large files

Optimizing your Workflow for a Cross-device world by Matt Gaunt

If you have to develop for desktop, mobile, tablets, wearables and other form factors, how can you optimise your workflow to make your life less stressful? There's a solid multi-device approach for quick iteration with LiveReload, Grunt, Yeoman, and the newly-unveiled Mini Mobile Device Lab. Lastly, if you don't have the physical hardware you want to test on, some providers make it available through the cloud.

Slides: Optimizing your Workflow for a Cross-device world

Key points

  • The number of devices that we are going to have to cater for is only going to increase
  • Get your workflow right with Grunt and Yeoman
  • Simplify cross-browser and cross-device testing with Mini Mobile Device Lab
  • Be smart with your emulation choices: Chrome DevTools emulation, stock emulators, cloud-based emulators like Saucelabs, Browserstack and Device Anywhere, and the third-party emulator Genymotion
  • Mobile testing means more than just testing on your wifi connection; use a proxy to simulate slower network speeds

Network connectivity: optional by Jake Archibald

We learnt many things from this talk: Jake doesn’t wear shoes when presenting; Business Kinlan has a new book coming out soon; and offline is being taken seriously by browser vendors, so you will soon have the tools in your hands to build great experiences that work well when you are offline.

ServiceWorker will give us the flexibility we need to build compelling offline-first experiences with ease, without suffering the pains inflicted by AppCache. You can even experiment with the API today using a polyfill.

Slides: Network connectivity: optional

ServiceWorker to the rescue

  • In the next generation of progressive enhancement, we treat the network as a potential enhancement
  • ServiceWorker gives you full, scriptable, debuggable control over network requests
  • If you have an offline experience, don’t wait for the network to fail before you show it, as this can take ages

The Yeoman Monthly Digest #2

By Addy Osmani

Allo’ Allo’ and Happy Holidays! Welcome to the second issue of the Yeoman monthly digest – our regular round-up of articles, tips, generators and videos to help you stay on top of what’s new with your favourite man-in-a-hat. We hope you find the updates below helpful!

Grunt pro-tips

It’s tempting to try every Grunt plug-in out there – there’s a bajillion! It’s also easy to get carried away. Before you know it, you’re staring at your terminal far longer than you used to, waiting for your tasks to complete. That can be frustrating during your build, but super frustrating during your watch.

Fortunately, the community has been working towards speeding up your development cycle even more.

  • Reduce your Grunt compilation time with this custom task trick
  • Use grunt-newer to only run Grunt tasks on files that changed
  • Run independent tasks simultaneously with grunt-concurrent


yo 1.0.7-pre is now available for testing on npm, and we look forward to talking more about our roadmap for 2014 in the coming weeks. In the meantime, there are lots of juicy new updates below, to both our official generators and those you've been authoring.

Official generator updates

  • Backbone 0.2.2 released with RequireJS + CoffeeScript support & --appPath option
  • AngularJS 0.7.1 with support for Angular 1.2.6 and grunt-bower-install
  • Ember.js 0.8.0 released. Scaffolding updated to Ember 1.2 syntax, improved CoffeeScript support, templating, REST routes
  • WebApp 0.4.5 and 0.4.6 including improved HTMLMin, bower install fixes and grunt-bower-install support for CSS dependencies
  • Polymer generator 0.0.8 with Web Component concatenation and other updates
  • Chrome app 0.2.5 - proper support for livereload, rewritten app generator, build task for packaging, new permissions code and more.

Other official generators including jQuery, Gruntfile, CommonJS, NodeJS and Mocha have also been updated.

Featured Community generators

StackOverflow answers

yo newyear

That's a wrap! If there are Yeoman resources you would like to see in the next issue, feel free to suggest them to @yeoman on Twitter or +Yeoman on Google+ and we’ll be sure to check ’em out. Happy Holidays and have a fantastic new year!

With special thanks to Stephen Sawchuk, Sindre Sorhus and Pascal Hartig for their review of this issue

DevTools Digest December 2013

By Umar Hansa at

A number of updated features have made it into the Chrome DevTools recently, some small, some big. We'll start out with the Element panel's updates and move on to talk about Console, Timeline, and more.

Disabled style rules copy as commented out

Copying entire CSS rules in the Styles pane will now include styles you toggled off; they will appear in your clipboard as commented out.

Copy as CSS path

‘Copy as CSS path’ is now available as a menu item for DOM nodes in the Elements panel (similar to the Copy XPath menu item).


Generating CSS selectors doesn't have to be limited to your stylesheets and JavaScript; they can also serve as alternative locator strategies in WebDriver tests.

View Shadow DOM element styles

Child elements of a shadow root can now have their styles inspected.

Console's copy() works for objects

The copy() method from the Command Line API now works for objects. Go ahead and try copy({foo:'bar'}) in the Console panel and notice how a stringified & formatted version of the object is now in your clipboard.

Regex filter for console

Filter console messages using regular expressions in the Console panel.

Easily remove event listeners

Try getEventListeners(document).mousewheel[0]; in the Console panel to retrieve the first mousewheel event listener on the document. Then try $_.remove(); to remove that listener ($_ holds the value of the most recently evaluated expression).

Removal of CSS warnings

Those "Invalid CSS property value"-style warnings you might have seen have now been removed. There are ongoing efforts to make the implementation more robust against real-world CSS, including browser hacks.

Timeline operations summarized in pie chart

The Timeline panel now contains a pie chart in the Details pane that visually shows the source of your rendering costs, helping you identify bottlenecks at a glance.

You'll find that much of the information that used to be displayed in popovers has now been promoted to its own pane. To see it, start a Timeline recording, select a frame, and take note of the new Details pane containing a pie chart. In Frames view, you'll also get interesting stats like average FPS (1000ms/frame duration) for the selected frame(s).
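The FPS figure is simple arithmetic: one second divided by how long a frame took. A quick illustration (the helper name below is hypothetical, not a DevTools API):

```javascript
// Hypothetical helper mirroring the calculation DevTools shows:
// 1000 ms per second divided by one frame's duration in ms.
function fpsForFrameDuration(durationMs) {
  return 1000 / durationMs;
}

console.log(fpsForFrameDuration(40)); // 25 FPS: visibly janky
console.log(fpsForFrameDuration(16.667).toFixed(1)); // "60.0": the smooth target
```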

Image resize event details

Image resize and decode events in the Timeline panel now contain a link to the DOM node in the Elements panel.

The Image URL link takes you to the corresponding resource in the Resources panel.

GPU Frames

Frames occurring on the GPU are now shown at the top, above frames on the main thread.

Break on popstate listeners

'popstate' is now available as an event listener breakpoint in the Sources panel sidebar.

Rendering settings available in the drawer

Opening the drawer now presents a number of panes, one of which is the Rendering pane; use it to show paint rectangles, the FPS meter and more. This is enabled by default via Settings > "Show 'Rendering' view in console drawer".

Copy image as data URL

Image assets in the Resources panel can now have their contents copied as a data URI (data:image/png;base64,iVBO...).

To try this out, find the image resource within Frames > [Resource] > Images, right-click the image preview to access the context menu, and select 'Copy Image as Data URL'.
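The copied value is just the image's bytes, base64-encoded, behind a MIME-typed prefix. As an illustration of the format (the helper below is hypothetical):

```javascript
// Hypothetical helper showing the shape of the data URI DevTools copies:
// data:<mime-type>;base64,<base64-encoded file contents>
function toDataUri(mimeType, base64Payload) {
  return 'data:' + mimeType + ';base64,' + base64Payload;
}

console.log(toDataUri('image/png', 'iVBO')); // data:image/png;base64,iVBO
```

"iVBO" here is the start of any real PNG's base64 payload, as in the example above.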

Data URI filtering

If you've never thought they belong there, data URIs can now be filtered out of the Network tab. Select the Filter icon to view the other resource filter types.

image alt text

Network Timing bugs fixed

If you saw your image apparently taking 300,000 years to download, our apologies. ;) These incorrect timings for network resources have now been fixed.

Network recording behavior has more control

Network recording now behaves a little differently. First, the record button acts just as you'd expect from the Timeline or a CPU profile. And because you'd expect it, reloading the page while DevTools is open automatically starts network recording; it then turns off, so if you want to capture network activity after page load, turn it back on. This makes it easier to visualize your waterfall without late-breaking network requests skewing the results.

DevTools themes now available through extensions

User stylesheets are now available through DevTools Experiments (checkbox: "Allow custom UI themes"), allowing a Chrome extension to apply custom styling to DevTools. See the Sample DevTools Theme Extension for an example.

That's it for this edition of the DevTools Digest. If you haven't already, check out the November edition.

New Web Animations engine in Blink drives CSS Animations & Transitions

By Alex Danilo at

Users expect smooth 60fps animations in modern multi-device UIs. Achieving that level of performance with the web’s current animation primitives can be difficult. Fortunately we’re working on a new Blink animation implementation that just shipped in Chrome Canary!

What’s exciting about this is that it simplifies the internals of Blink and lays the groundwork for inclusion of new API features from the Web Animations 1.0 specification.

Until now, CSS Animations and CSS Transitions had been separate implementations, written independently, that didn’t necessarily play well together. For the past few years, browser implementers have been working together on a next-generation animation model with support for things like synchronization, chaining animations to run in sequence, seeking to arbitrary points in animation time, allowing animations to change speed, reverse and more.

The effort led to the formation of the W3C specification Web Animations 1.0.

The first step from the Blink team in getting Web Animations out into the world is replacing the existing Blink CSS Animations/Transitions C++ implementation with the Web Animations engine. Having reached that milestone now, we’d like as many developers as possible to check nothing’s been broken and more importantly to keep an eye on the implementation effort and give us feedback on what’s good/bad or might need changing.

Next up will be implementation of an API that lets you create, modify, and interrogate animations from JavaScript. The API is designed to let animations run efficiently (using declarative semantics, JavaScript manages creating animations but hands off control to the browser) whilst still exposing full animation control to the JavaScript developer.
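As a rough sketch of what that declarative hand-off could look like, based on the Web Animations 1.0 draft (the selector and element are hypothetical, and the API surface may well change before shipping):

```javascript
// Keyframes and timing are plain declarative objects; the browser, not
// your script, drives the animation from them once handed off.
const keyframes = [
  { transform: 'translateX(0px)' },
  { transform: 'translateX(300px)' }
];
const timing = { duration: 1000, iterations: 2 };

// In a supporting browser you would hand these off and keep a player
// handle for seeking, speed changes and reversal (draft API, may change):
//
//   var player = document.querySelector('.box').animate(keyframes, timing);
//   player.playbackRate = 0.5; // run at half speed
//   player.currentTime = 500;  // seek to an arbitrary point in time
//   player.reverse();          // play backwards
```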

We’re looking for active feedback on the proposed API to make sure we haven’t missed any features needed for powerful animation control. As with any new feature, the specification will continue to change, so now is the time to make your voice heard – ideally by subscribing to and contributing to the mailing list (and put [Web Animations] in the subject line so it gets noticed).

Try out the new engine that’s already powering CSS Animations & Transitions now and post any weirdness to the Chromium bug tracker so we know about it.

We're excited to bring next-generation animation capabilities to Blink and look forward to working with other browser projects like WebKit and Mozilla who've also committed to implementing the new model.