HTML5 Rocks

Screensharing with WebRTC

By Sam Dutton

As we reported last week, there's been a lot happening lately with our old friend WebRTC.

Well... here's another first: WebRTC screensharing.

Screengrab of WebRTC screensharing extension, featuring Jake Archibald, Peter Beverloo, Paul Lewis and Sam Dutton

Here's a screencast:

...and here's the code:

In essence, we've built an experimental Chrome extension that uses RTCPeerConnection and chrome.tabCapture to share a live 'video' of a browser tab. If you want to try it out, you'll need Chrome Canary, and you'll need to enable Experimental Extension APIs on the about:flags page.
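If you want to build something similar, the extension's manifest needs permission for the tab and experimental APIs. Here's a minimal sketch; the name, file names, and the exact permission set are our guesses for illustration, so check the chrome.tabCapture documentation for what's currently required:

```json
{
  "name": "WebRTC Tab Sharing (experiment)",
  "version": "0.1",
  "manifest_version": 2,
  "permissions": ["experimental", "tabs"],
  "background": {"scripts": ["background.js"]},
  "browser_action": {"default_title": "Share this tab"}
}
```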

Our prototype relies heavily on the mighty apprtc demo and, to be frank, it's a bit of a hack! But... it's a proof of concept, and it works.

Here's how we did it:

  1. When the user clicks the extension icon (the 'record button' next to the address bar), the extension's background script, background.js, appends an iframe to itself. In background.js, the iframe is only used to get values such as token and room_key. We told you this was a hack :^}! This is a chopped and channeled version of the apprtc app and, as with the apprtc example, the same app is also used for the remote client.

  2. chrome.browserAction.onClicked.addListener(function(tab) {
       var currentMode = localStorage["capturing"];
       var newMode = currentMode === "on" ? "off" : "on";
       if (newMode === "on") { // start capture
         appendIframe();
       } else { // stop capture
         chrome.tabs.getSelected(null, function(tab) {
           // set icon, localStorage, etc.
         });
       }
     });
  3. When the iframe has loaded, background.js gets values from it (generated by the app) and calls chrome.tabCapture.capture() to start capturing a live stream of the current tab.

  4. function appendIframe() {
       iframe = document.createElement("iframe");
       iframe.onload = function() {
         iframe.contentWindow.postMessage("sendConfig", "*");
       };
       document.body.appendChild(iframe);
     }

     // serialised config object messaged by iframe when it loads
     window.addEventListener("message", function(event) {
       if (event.origin !== "") { // check against the expected origin here
         var config = JSON.parse(;
         room_link = config.room_link; // the remote peer URL
         token = config.token; // for messaging via Channel API
         // more parameters set from config
         startCapture();
       }
     });

     function startCapture() {
       chrome.tabs.getSelected(null, function(tab) {
         var selectedTabId =;
         chrome.tabCapture.capture({audio: true, video: true}, handleCapture); // bingo!
       });
     }
  5. Once the live stream is available (in other words, a live 'video' of the current tab), background.js kicks off the peer connection process, with signalling done using XHR and Google's Channel API. All in all, it works like the apprtc demo, except that the video stream communicated to the remote peer comes from chrome.tabCapture and not getUserMedia().

  6. function handleCapture(stream) {
       localStream = stream; // used by RTCPeerConnection addStream()
       initialize(); // start signalling and peer connection process
     }
  7. For demo purposes, this prototype extension opens a new tab with the URL provided by the app, with a 'room number' query string appended. Of course, this URL could be opened on another computer, in another place, and THAT might be the start of something useful!

  8. chrome.tabs.create({url: room_link});
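Putting steps 5 and 6 together: once handleCapture() has the tab stream, the peer connection side looks roughly like this. This is a sketch, not the extension's actual code; startPeerConnection and sendMessage are hypothetical names, with sendMessage standing in for the XHR/Channel API signalling described above.

```javascript
// Sketch: hand a captured tab stream to a peer connection and send an offer.
// PeerConnection is the browser's webkitRTCPeerConnection constructor;
// sendMessage is a hypothetical signalling helper (XHR + Channel API).
function startPeerConnection(PeerConnection, stream, sendMessage) {
  var pc = new PeerConnection({iceServers: []});
  pc.addStream(stream); // the captured tab stream, not getUserMedia()
  pc.onicecandidate = function(event) {
    if (event.candidate) {
      sendMessage({type: "candidate", candidate: event.candidate});
    }
  };
  pc.createOffer(function(offer) {
    pc.setLocalDescription(offer);
    sendMessage(offer); // the remote peer replies with an answer
  });
  return pc;
}
```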

We envisage a lot of interesting use cases for screensharing and, even at this early stage of development, we're impressed at how responsive and stable plugin-free tab capture and sharing can be.

As ever, we welcome your comments: about this extension and about the WebRTC APIs in general. If you want to learn more about WebRTC, check out the HTML5 Rocks article or our Quick Start Guide.

Happy hacking -- and best wishes for 2013 from everyone at HTML5R and WebRTC!

WebRTC hits Firefox, Android and iOS

By Sam Dutton

A lot has happened with WebRTC over the last few weeks. Time for an update!

In particular, we're really excited to see WebRTC arriving on multiple browsers and platforms.

getUserMedia is available now in Chrome with no flags, as well as in Opera and Firefox Nightly/Aurora (though for Firefox you'll need to set preferences). Take a look at the cross-browser demo of getUserMedia, and check out Chris Wilson's amazing examples of using getUserMedia as input for Web Audio.

webkitRTCPeerConnection is now in Chrome stable and it's flagless. TURN server support is available in Chrome 24 and above. There's an ultra-simple demo of Chrome's RTCPeerConnection implementation, as well as a great video chat application. (A word of explanation about the name: after several iterations, it's currently known as webkitRTCPeerConnection. Other names and implementations have been deprecated; when the standards process has stabilised, the webkit prefix will be removed.)

WebRTC has also now been implemented for desktop in Firefox Nightly and Aurora, and for iOS and Android via the Ericsson Bowser browser.
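With each implementation currently shipping behind a vendor prefix, a small shim makes demo code portable across these browsers. A sketch (getRTCPeerConnection is our own helper, not a platform API; pass in the global object):

```javascript
// Return whichever RTCPeerConnection constructor the browser provides.
function getRTCPeerConnection(global) {
  return global.RTCPeerConnection ||       // future unprefixed name
         global.webkitRTCPeerConnection || // Chrome
         global.mozRTCPeerConnection ||    // Firefox Nightly/Aurora
         null;                             // not supported
}
```

In the browser, you'd call this as getRTCPeerConnection(window).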


DataChannel is a WebRTC API for high-performance, low-latency, peer-to-peer communication of arbitrary data. The API is simple—similar to WebSocket—but communication occurs directly between browsers, so DataChannel can be much faster than WebSocket even if a relay (TURN) server is required (when 'hole punching' to cope with firewalls and NATs fails).

DataChannel is planned for version 25 of Chrome, behind a flag – though it may miss this version. This will be for experimentation only, may not be fully functional, and communication won't be possible with the Firefox implementation. DataChannel in later versions should be more stable and will be implemented so as to enable interaction with DataChannel in Firefox.

Firefox Nightly/Aurora supports mozGetUserMedia, mozRTCPeerConnection and DataChannel (but don't forget to set your about:config preferences!)
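For reference, these are the about:config preferences to flip (assuming current Nightly/Aurora builds; names may change as the implementation matures):

```
media.navigator.enabled = true
media.peerconnection.enabled = true
```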

Here's a screenshot of DataChannel running in Firefox:

Here's a code snippet from the demo:

pc1.onconnection = function() {
  log("pc1 onConnection");
  dc1 = pc1.createDataChannel("This is pc1", {}); // reliable (TCP-like)
  dc1 = pc1.createDataChannel("This is pc1", {outOfOrderAllowed: true, maxRetransmitNum: 0}); // unreliable (UDP-like)
  log("pc1 created channel " + dc1 + " binarytype = " + dc1.binaryType);
  channel = dc1;
  channel.binaryType = "blob";
  log("pc1 new binarytype = " + dc1.binaryType);

  // Since we create the datachannel, don't wait for onDataChannel!
  channel.onmessage = function(evt) {
    if ( instanceof Blob) {
      fancy_log("*** pc2 sent Blob: " + + ", length=" +, "blue");
    } else {
      fancy_log("pc2 said: " +, "blue");
    }
  };
  channel.onopen = function() {
    log("pc1 onopen fired for " + channel);
    channel.send("pc1 says Hello...");
    log("pc1 state: " + channel.readyState);
  };
  channel.onclose = function() {
    log("pc1 onclose fired");
  };
  log("pc1 state: " + channel.readyState);
};

More information and demos for the Firefox implementation are available from the blog. Basic WebRTC support is due for release in Firefox 18 at the beginning of 2013, and support is planned for additional features including getUserMedia and createOffer/Answer constraints, as well as TURN (to allow communication between browsers behind firewalls).

For more information about WebRTC, see Getting Started With WebRTC. There's even a WebRTC book, available in print and several eBook formats.

Resolution Constraints

Constraints have been implemented in Chrome 24 and above. These can be used to set values for video resolution for getUserMedia() and RTCPeerConnection addStream() calls.

There's an example you can play around with: try different constraints by setting a breakpoint and tweaking values.

A couple of gotchas... getUserMedia constraints set in one browser tab affect constraints for all tabs opened subsequently. Setting a disallowed value for constraints gives a rather cryptic error message:

navigator.getUserMedia error:  NavigatorUserMediaError {code: 1, PERMISSION_DENIED: 1}

You'll get the same error if you try to use getUserMedia from the local file system rather than from a server!
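Since the error object only carries a numeric code, a tiny helper of our own can turn it into something readable:

```javascript
// Translate a NavigatorUserMediaError into a readable message.
// PERMISSION_DENIED is also what you get for disallowed constraint
// values and for pages loaded from file:// instead of a server.
function describeUserMediaError(error) {
  if (error.code === error.PERMISSION_DENIED) {
    return "getUserMedia failed: permission denied (or disallowed " +
           "constraints, or a file:// page)";
  }
  return "getUserMedia failed: error code " + error.code;
}
```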

Streaming screen capture

Tab Capture is now available in the Chrome Dev channel. This makes it possible to capture the visible area of the tab as a stream, which can then be used locally, or with RTCPeerConnection's addStream(). Very useful for screencasting and web page sharing. For more information see the WebRTC Tab Content Capture proposal.

Keep us posted by commenting on this update: we'd love to hear what you're doing with these APIs.

...and don't forget to file any bugs you encounter!

Live Web Audio Input Enabled!

By Chris Wilson

I'm really excited by a new feature that went into yesterday's Chrome Canary build (23.0.1270.0) - the ability to get low-latency access to live audio from a microphone or other audio input on OS X! (This has not yet been enabled on Windows - but don't worry, we're working on it!)

[UPDATE Oct 8, 2012: live audio input is now enabled for Windows, as long as the input and output device are using the same sample rate!]

To enable this, go to chrome://flags/, enable the "Web Audio Input" item near the bottom, and relaunch the browser; now you're ready to roll! Note: if you're using a microphone, you may need to use headphones for any output in order to avoid feedback. If you're using a different audio source, such as a guitar or an external audio feed, or if the demo produces no audio output, feedback won't be a problem. You can test live audio input by checking the spectrum of your input with the live input visualizer.

For those Web Audio coders among you, here's how to request the audio input stream, and get a node to connect to any processing graph you like!

// success callback when requesting audio input stream
function gotStream(stream) {
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioContext = new AudioContext();

    // Create an AudioNode from the stream.
    var mediaStreamSource = audioContext.createMediaStreamSource(stream);

    // Connect it to the destination to hear yourself (or any other node for processing!)
    mediaStreamSource.connect(audioContext.destination);
}

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
navigator.getUserMedia({audio: true}, gotStream);
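To process the live input rather than just monitor it, insert a node between the source and the destination. Here's a sketch routing the input through a lowpass BiquadFilterNode; connectThroughFilter and the cutoff value are our own choices, and older WebKit builds may expect a numeric filter type rather than a string:

```javascript
// Route an input node through a lowpass filter to the speakers.
function connectThroughFilter(audioContext, sourceNode) {
  var filter = audioContext.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = 1000; // pass only frequencies below ~1 kHz
  sourceNode.connect(filter);
  filter.connect(audioContext.destination);
  return filter; // keep a reference to tweak the cutoff later
}
```

Inside gotStream() above, you'd call connectThroughFilter(audioContext, mediaStreamSource) instead of connecting the source straight to the destination.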

There are many rich possibilities for low-latency audio input, particularly in the musical space. You can see a quick example of how to make use of this in a simple pitch detector I threw together - try plugging in a guitar, or even just whistling into the microphone.

And, as promised, I've added live audio as an input source to the Vocoder I wrote for Google IO - just select "live input" under modulator. You may need to adjust the Modulator Gain and the Synth Level. There's a slight lag due to processing (not due to input latency). Now that I have live audio input, it's time for another round of tweaking!

Finally, you may want to take a look at the collection of my web audio demos - by the time you read this, I may have some more live audio demos up!