Emmy award: Bringing premium media to the web

The National Academy of Television Arts & Sciences recognized Microsoft with two Technology & Engineering Emmy Awards, which I was honored to accept on behalf of the Microsoft Edge Web Platform team in 2019 and 2020:

The 70th Technology & Engineering Emmy Awards, Las Vegas; 4/7/2019

John Simmons and I accepting the Emmy on behalf of the Windows and Microsoft Edge teams, respectively.

Today, premium media services, like Netflix, Amazon Prime Video, Hulu, and others, use the HTML5 video, Encrypted Media Extensions (EME), and Media Source Extensions (MSE) web standards to deliver premium movie and TV experiences on web sites, web apps, and set-top boxes built with web technologies. Before these web standards existed, most premium media sites relied on plug-ins, like Flash, Silverlight, and others, to deliver these experiences on the web. Unfortunately, plug-ins had many flaws: they required the user to install them before viewing any video, and they had notoriously poor performance and reliability and suffered from many security issues.

Starting in 2013, Microsoft worked within the W3C alongside industry partners like Netflix, Comcast, and Google to develop several web standards that would support these same experiences natively, without plug-ins. We first helped standardize the HTML5 video element, so browsers would natively support a video player without requiring the user to install a plug-in. The video element was great for most videos but had some limitations: it relied on progressive playback of a single video file, so quality could not adapt to network conditions, and the video source was available for anyone to see. While this is fine for most media, content owners that stream premium video want playback to start quickly at the highest possible quality, and want to ensure that only users authorized by the streaming service can view the video.

To solve the first problem, we helped standardize Media Source Extensions to support adaptive streaming. With this web standard and a streaming server, sites can adaptively switch between different media streams: users with good bandwidth get the highest-quality, highest-bitrate streams, while users with lower bandwidth get lighter ones. This makes users far less likely to see buffering as their network conditions change, and removes the need to wait for the full video to download.
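As a rough sketch of what MSE enables, the snippet below feeds media segments to a video element through a MediaSource. This is illustrative only: the codec string, segment URLs, and the in-order append loop are assumptions; a real adaptive player measures bandwidth and chooses each segment's bitrate accordingly.

```javascript
// Minimal MSE sketch; the codec string and segment URLs are illustrative.
var mimeCodec = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';

function startAdaptiveStream(videoElement, segmentUrls) {
  var mediaSource = new MediaSource();
  videoElement.src = URL.createObjectURL(mediaSource);
  mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
    var index = 0;
    function appendNext() {
      if (index >= segmentUrls.length) {
        mediaSource.endOfStream();
        return;
      }
      // A real player would pick the next segment's bitrate from measured
      // bandwidth; here we simply append the segments in order.
      fetch(segmentUrls[index++])
        .then(function (res) { return res.arrayBuffer(); })
        .then(function (buf) { sourceBuffer.appendBuffer(buf); });
    }
    sourceBuffer.addEventListener('updateend', appendNext);
    appendNext();
  });
}

// Only run where MSE is available and the codec is supported.
if (typeof MediaSource !== 'undefined' && MediaSource.isTypeSupported(mimeCodec)) {
  startAdaptiveStream(document.querySelector('video'), ['/seg0.mp4', '/seg1.mp4']);
}
```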

To solve the second problem, we helped standardize Encrypted Media Extensions so web sites could leverage content protection systems, like Digital Rights Management (DRM), for web media. Content owners will not stream premium content if it can be easily saved and shared outside the service. With EME, web sites can ensure their premium media content is protected.
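To sketch how the EME pieces fit together: the page asks the browser for a key system, attaches MediaKeys to the video element, and exchanges license messages with a license server when encrypted data is encountered. The key system name and license endpoint below are illustrative assumptions, not the code any particular service uses.

```javascript
// EME sketch; key system string and license URL are illustrative.
var keySystemConfig = [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}];

function setupProtectedPlayback(video, keySystem, licenseUrl) {
  return navigator.requestMediaKeySystemAccess(keySystem, keySystemConfig)
    .then(function (access) { return access.createMediaKeys(); })
    .then(function (mediaKeys) { return video.setMediaKeys(mediaKeys); })
    .then(function () {
      // Fired when the stream contains encrypted data.
      video.addEventListener('encrypted', function (event) {
        var session = video.mediaKeys.createSession();
        session.addEventListener('message', function (msg) {
          // Send the license request to the license server, then hand the
          // returned license back to the content decryption module.
          fetch(licenseUrl, { method: 'POST', body: msg.message })
            .then(function (res) { return res.arrayBuffer(); })
            .then(function (license) { return session.update(license); });
        });
        session.generateRequest(event.initDataType, event.initData);
      });
    });
}

if (typeof navigator !== 'undefined' && navigator.requestMediaKeySystemAccess) {
  setupProtectedPlayback(document.querySelector('video'),
                         'com.microsoft.playready', '/license');
}
```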

On the browser media team, we worked closely with the Windows team to bring these technologies to IE11 and Microsoft Edge. While IE11 was the first browser to implement many of these early web standards, Microsoft Edge continues to provide best-in-class protected media support. Microsoft Edge often receives the highest-resolution, highest-bitrate video because it’s the only browser on Windows to use the robust, hardware-backed Microsoft PlayReady DRM – as video quality goes up, so does the need for stronger protection. Sites that rely on hardware-backed PlayReady DRM in Microsoft Edge can stream 1080p or 4K with high dynamic range (HDR), confident that their content cannot be stolen, while also giving their users the best battery life because the work is done in hardware. To provide developers with technology choice and the highest level of compatibility, Microsoft Edge also supports multiple DRM systems, including both the Microsoft PlayReady and Google Widevine DRM systems.


One of the Emmy awards given to Microsoft innovators.

I’m very appreciative of the National Academy of Television Arts & Sciences for recognizing Microsoft’s contributions in helping make the web better for premium video experiences. It’s great to see how far the web has come in just a few years, and I’m looking forward to seeing how we continue to make it even better with more powerful capabilities in the future.


Build 2019: Moving the web forward with Microsoft Edge

Last year, we announced that the next major version of Microsoft Edge would be based on the Chromium open source project, and that we intended to become significant contributors to that project in a way that would make not just Microsoft Edge, but other browsers as well, better on both PCs and other devices.

At Build 2019, Gaurav Seth and I went deeper into our plans for how we intended to move the web forward with Microsoft Edge, including how we intended to contribute to Chromium, our ongoing work in web standards, and delivering a consistent set of developer tools and application experiences built using web technologies.


Video link to the Build 2019 session: Moving the web forward with Microsoft Edge

Service Workers: Going beyond the page

Ali and I announced Service Worker and Progressive Web App support on the Microsoft Edge blog:

We’re thrilled to announce that today’s Windows Insider build enables Service Workers by default in Microsoft Edge for the first time.

This is an exciting milestone for us and for the web! These new APIs allow the web to offer better experiences when devices are offline or have limited connectivity, and to send push notifications even when the page or browser isn’t open.

This milestone also establishes the foundation for full-featured Progressive Web App experiences in Microsoft Edge and the Microsoft Store. We’ve carefully tailored our implementation together with Windows and are excited to be the only browser on Windows 10 to provide push handling in the background to optimize for better battery usage. We’ll have more to share about PWAs on Windows in the weeks ahead, so be sure to stay tuned!

We believe Service Workers are a foundational new tool for the web with the potential to usher in a new golden age of web apps. You can try out web experiences powered by Service Workers in Microsoft Edge starting with today’s Windows Insider release, 17063.

The state of the web

Not too long ago, the web’s capabilities were lagging behind what native apps could do. Browser vendors, standards bodies, and the web community have relentlessly attacked this gap over the last decade, with the goal of enabling richer web experiences.

A particularly egregious sore spot for the web has always been how it handled—or failed to handle—the lack of an internet connection, or a poor-quality connection. Offline and the web never really went well together—in most cases, we were given a frustrating error page that only made it clearer that the web’s greatest asset was also its greatest weakness: the internet. In contrast, native apps are typically designed to provide a good experience even while offline or experiencing spotty service.

On top of that, native apps can re-engage their users with push notifications. On the web, after the browser or the app disappears, so does your ability to deliver relevant information or updates to your users.

In the rearview: Application Cache

By 2012, the web had its first real answer to offline in the form of a new standard: Application Cache (or App Cache for short). A site author could list URLs for the browser to keep in a special cache, called the App Cache, so that when you visited the page without a connection you would see something other than an infuriating error page that made you want to smash your keyboard and throw it against the wall.

Unfortunately, this wasn’t the silver bullet we were looking for to bring offline to the web. App Cache had more than a few well-documented limitations that made it confusing and error-prone for web developers. The sum of it all was that developers had little control over how App Cache worked, since most of the logic happened behind the scenes in the browser.

That meant if you ran into an issue, it was exceedingly difficult to understand how to resolve it. There was an obvious need for something that offered the capabilities of App Cache but gave developers greater control: more programmatic, more dynamic, and without the limitations that made App Cache so difficult to debug.

Hit Rewind

App Cache left a lot to be desired. For the next swing at enabling offline scenarios, it was clear that browsers needed to provide web developers true control over what would happen when a page and its sub-resources were downloaded, rather than having it automatically and transparently handled in the browser.

With the ability to intercept network requests from a page and to prescribe what to do with each, site authors would be able to respond back to the page with a resource that it could use. Before we get there, it seems that we would need to revisit one of the most fundamental aspects of the web: fetching a resource.

How fetching!

As you may recall from my last post on Fetch, we now have the fetch() method as well as the Request and Response primitives. As a refresher, here’s how you might retrieve some JSON data using the fetch API:

fetch('data.json') // illustrative URL; the request in the original snippet was truncated
  .then(function(response) {
    if (response.headers.get('content-type') === 'application/json') {
      return response.json();
    } else {
      throw new TypeError();
    }
  })
  .then(function(data) {
    // use the parsed JSON here
  });

Every request that happens on a page (including the initial navigation, CSS, images, scripts and XHR) is defined as a fetch. The fetch() method (as shown in the code sample) is just a way to explicitly initiate a fetch, while implicit fetches occur when loading a page and all of its sub-resources.

Since we’ve unified the concepts of fetching resources across the web platform, we can provide site authors the chance to define their own behavior via a centralized algorithm. So, how do we pass that control over to you?

Service worker: the worker that serves

Web workers have long been a great tool to offload intensive JavaScript to a separate execution context that doesn’t block the UI nor interaction with the page. Given the right conditions, we can repurpose the concept of a web worker to allow a developer to write logic in response to a fetch occurring on a page. This worker wouldn’t just be a web worker, though. It deserves a new name: Service worker.

A service worker, like a web worker, is written in a JavaScript file. In the script, you can define an event listener for the fetch event. This event is special, in that it gets fired every time the page makes a request for a resource. In the handler for the fetch event, a developer will have access to the actual Request being made.

self.onfetch = function(event) {
  // Respond to the page with the result of a fetch for the intercepted request.
  event.respondWith(fetch(event.request));
};

You can choose to respond by performing a fetch for the provided Request, which returns a Response that is handed back to the page. Doing this essentially reproduces the browser’s default behavior for that request, so on its own it isn’t especially useful; it becomes powerful once we start saving previous Response objects for later.

Cache it!

The Cache API allows us to look up a specific Request and get its associated Response object. The APIs give access to a new underlying key/value storage mechanism, where the key is a Request object and the value is a Response object. The underlying caches are separate from the browser’s HTTP cache and are origin-bound (meaning they are isolated based on scheme://hostname:port), so you cannot access caches outside of your origin. Each origin can define multiple caches with different names. The APIs allow you to asynchronously open and manipulate the caches by making use of Promises:

caches.open('my-cache').then(function(cache) {
  // Illustrative resource URLs; the list in the original snippet was truncated.
  return cache.addAll([
    '/home.html',
    '/styles.css'
  ]);
});

These caches are completely managed by the developer, including updating the entries and purging them when they’re no longer needed – this allows you to rely on what will be there when you may not necessarily be connected to the internet.

Although the Caches API is defined as part of the Service Worker spec, it can also be accessed from the main page.
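For instance, a page script can prime a cache and read from it directly, with no service worker involved. The following is a sketch with an illustrative cache name and resource URL:

```javascript
// Using the Cache API from a page script rather than a service worker.
// The cache name and resource URL are illustrative.
function cacheArticle(url) {
  return caches.open('page-cache')
    .then(function (cache) { return cache.add(url); })  // fetch and store
    .then(function () { return caches.match(url); })    // read it back
    .then(function (response) { return response || null; });
}

if (typeof caches !== 'undefined') {
  cacheArticle('/article.html').then(function (response) {
    if (response) { console.log('Article is available offline'); }
  });
}
```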

So now you have two asynchronous storage APIs to choose from: Indexed DB and the Caches API. In general, if what you’re trying to store is URL-addressable, use the Caches API; for everything else, use Indexed DB.

Now that we have a way to save those Response objects for later use, we’re in business!

Back to the worker

With a service worker, we can intercept the request and respond from cache. This gives us the ability to improve page load performance and reliability, as well as to offer an offline experience. You can choose to let the fetch go through to the internet as is, or to get something from the cache using the Cache API.

The first step to using a service worker is to register it on the page. You can do this by first feature-detecting and then calling the necessary APIs:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('sw.js', { scope: '/' }).then(
    function (registration) {
      console.log('Service worker registered!');
    },
    function (err) {
      console.error('Installation failed!', err);
    });
}

As part of the registration, you’ll need to specify the location of the service worker script file and define the scope. The scope is used to define the range of URLs that you want the service worker to control. After a service worker is registered, the browser will keep track of the service worker and the scope it is registered to.

Upon navigating to a page, the browser will check if a service worker is registered for that page based on the scope. If so, the page will go on to use that service worker until it is navigated away or closed. In such a case, the page is said to be controlled by that service worker. Otherwise, the page will instead use the network as usual, and will not be controlled by a service worker.

Upon registration, the service worker won’t control the page that registered it. It will take control if you refresh the page or you open a new page that’s within its scope.
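If you want a new service worker to take effect sooner, the spec provides two escape hatches: skipWaiting() jumps past the waiting phase, and clients.claim() takes control of open in-scope pages without a reload. A sketch of a service worker script using both follows; the guard at the bottom simply lets the snippet load outside a worker context.

```javascript
// Handlers for a service worker script (sw.js).
function installHandler(event) {
  // Don't wait for old clients to close; move straight to activation.
  event.waitUntil(self.skipWaiting());
}

function activateHandler(event) {
  // Take control of in-scope pages immediately, without a navigation.
  event.waitUntil(self.clients.claim());
}

// Only wire up the handlers when running inside a service worker.
if (typeof self !== 'undefined' && 'skipWaiting' in self) {
  self.oninstall = installHandler;
  self.onactivate = activateHandler;
}
```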

After registration is initiated, the service worker goes through the phases of its lifecycle.

The service worker lifecycle

Let’s unpack the different phases of the service worker’s lifecycle, starting with what happens once you try to register it:

  • Installing: This is the first step that any service worker goes through. After the JavaScript file has been downloaded and parsed by the browser, it will run the install event of your script. That’s when you’ll want to get everything ready such as priming your caches.

In the following example, the oninstall event handler in the service worker will create a cache called “static-v1” and add all the static resources of the page to the cache for later use by the fetch handler.

self.oninstall = function(event) {
  event.waitUntil(
    caches.open('static-v1').then(function(cache) {
      // Illustrative resource URLs; the list in the original snippet was truncated.
      return cache.addAll([
        '/index.html',
        '/styles.css',
        '/script.js'
      ]);
    })
  );
};

  • Installed: At this point, the setup is complete, and the service worker is awaiting all pages/iframes (clients) that are controlled by this service worker registration to be closed so that it can be activated. It could be potentially problematic to change the service worker for pages that are still actively using a previous version of the service worker, so the browser will instead wait until they’ve been navigated away or closed.
  • Activating: Once no clients are controlled by the service worker registration (or if you called the skipWaiting API), the service worker goes to the activating phase. This will run the activate event in the service worker which will give you the opportunity to clean up after the previous workers that may have left things behind, such as stale caches.

In this example, the onactivate event handler in the service worker will remove all caches that are not named “static-v1.”

self.onactivate = function(event) {
  var keepList = ['static-v1'];
  event.waitUntil(
    caches.keys().then(function(cacheNameList) {
      return Promise.all(cacheNameList.map(function(cacheName) {
        // Delete any cache that isn't on the keep list.
        if (keepList.indexOf(cacheName) === -1) {
          return caches.delete(cacheName);
        }
      }));
    })
  );
};

  • Activated: Once it’s been activated, the service worker can now handle fetch and other events as well!

In this example, the onfetch event handler in the service worker will respond back to the page with a match from the cache if it exists and if there isn’t an entry in the cache, it will defer to making a fetch to the internet instead. If that fetch fails, it will resort to returning a fallback.

self.onfetch = function(event) {
  event.respondWith(
    caches.match(event.request).then(function(response) {
      return response || fetch(event.request).catch(function() {
        return caches.match('/fallback.html');
      });
    })
  );
};

  • Redundant: The final phase of a service worker: it has been replaced by a newer service worker and is no longer in use.

There’s more to it: the big picture

So far, we’ve explored the following service worker events: install, activate, and fetch. Install and activate are considered lifetime events while fetch is considered a functional event. What if we could expand on the service worker’s programming model and introduce other functional events that could plug in to it? Given that service workers are event-driven and are not tied down to the lifetime of a page, we could add other events such as push and notificationclick which would present the necessary APIs to enable push notifications on the web.

Push it to the limit

Push notifications provide a mechanism for developers to inform their users in a timely, power-efficient, and dependable way that re-engages them with customized and relevant content. Compared to current web notifications, a push notification can be delivered to a user without the browser, app, or page needing to be open.

The W3C Push API and Notification API go hand-in-hand to enable push notifications in modern browsers. The Push API is used to set up a push subscription and is invoked when a message is pushed to the corresponding service worker. The service worker then is responsible for showing a notification to the user using the Notification API and reacting to user interaction with the notification.

A standardized method of message delivery is also important for the W3C Push API to work consistently across all major browsers, since application servers need to work with multiple push services. For instance, Google Chrome and Mozilla Firefox use Firebase Cloud Messaging (FCM) and Mozilla Cloud Services (MCS), respectively, while Microsoft Edge relies on the Windows Push Notification Service (WNS) to deliver push messages. To reach interoperability with other browsers’ messaging services, WNS has now deployed support for the Web Push protocols being finalized within the IETF, as well as the Message Encryption spec and the Voluntary Application Server Identification (VAPID) spec for web push. Web developers can now use the Web Push APIs and service workers to provide an interoperable push service on the web.

To start, you’ll first need to make sure your web server is set up to send pushes. The web-push open-source library is a great reference for anyone new to web push; the contributors have done a good job of keeping up with the IETF specs. After starting up a Node.js server based on the web-push library, you’ll need to set up the VAPID keys. Keep in mind that you’ll need to use HTTPS, as it is required for service workers and push. You only need to set up the VAPID keys once, and they can be generated easily using the corresponding function in the web-push library.

var webpush = require('web-push');

// Generate these once (e.g. with webpush.generateVAPIDKeys()) and reuse them.
var vapidKeys = {
  publicKey: 'BL6As_YCGHPf3ZeDbklyVxgvJVb4Tr5qjZFS-J7XzkT5zQNghd9iUBUsqSlVO5znwTsZZrEOx8JFRDJc1JmkymA',
  privateKey: 'GnMVDgbtZrqs7tgKEkJaV5aZF8cVjoq7Ncz_TEVI_lo'
};
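With the keys in place, the server can later deliver a message to a saved subscription. The sketch below uses the same web-push library; the contact address and payload are illustrative, and the subscription object is whatever PushSubscription JSON the page previously posted back to the server. The try/catch only lets the sketch load when web-push isn’t installed.

```javascript
// Server-side sketch: send a push to a stored subscription via web-push.
var webpush = null;
try { webpush = require('web-push'); } catch (e) { /* library not installed */ }

function sendWeatherAlert(vapidKeys, subscription) {
  if (!webpush) { return Promise.resolve(null); }
  // VAPID details identify your application server to the push service.
  webpush.setVapidDetails('mailto:admin@example.com',  // illustrative contact
                          vapidKeys.publicKey, vapidKeys.privateKey);
  return webpush.sendNotification(subscription, 'Severe weather expected tonight');
}
```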

With that sorted out, it’s time to take advantage of push in your site or app. When the page loads, the first thing you’ll want to do is get the public key from the application server so that you can set up the push subscription.

function urlBase64ToUint8Array(base64String) {
  const padding = '='.repeat((4 - base64String.length % 4) % 4);
  const base64 = (base64String + padding)
    .replace(/\-/g, '+')
    .replace(/_/g, '/');
  const rawData = window.atob(base64);
  const outputArray = new Uint8Array(rawData.length);
  for (let i = 0; i < rawData.length; ++i) {
    outputArray[i] = rawData.charCodeAt(i);
  }
  return outputArray;
}

// Ask the application server for its public key (endpoint is illustrative;
// the request in the original snippet was truncated).
fetch('/vapidPublicKey')
  .then(function(res) {
    res.json().then(function(data) {
      var appPubkey = urlBase64ToUint8Array(data.key);
      registerPush(appPubkey);
    });
  });

With the public key in hand, as before, we’ll need to install the service worker, but this time, we’ll also create a push subscription.

function registerPush(appPubkey) {
  if (navigator.serviceWorker) {
    navigator.serviceWorker.register('sw.js')
      .then(function(registration) {
        return registration.pushManager.getSubscription().then(function(subscription) {
          if (subscription) {
            return subscription;
          }
          return registration.pushManager.subscribe({
            userVisibleOnly: true,
            applicationServerKey: appPubkey
          });
        });
      });
  }
}

Before a new push subscription is created, Microsoft Edge will check whether a user granted permission to receive notifications. If not, the user will be prompted by the browser for permission. You can read more about permission management in an earlier post about Web Notifications in Microsoft Edge. From a user’s perspective, it’s not obvious whether a notification will be shown via the page or through a push service, so we are using the same permission for both types of notifications.

To create a push subscription, you’ll need to set the userVisibleOnly option to “true” – meaning a notification must be shown as a result of a push – and provide a valid applicationServerKey. If there is already a push subscription, there is no need to subscribe again.

At any point when a push is received by the client, a corresponding service worker is run to handle the event. As part of this push handling, a notification must be shown so that the user understands that something is potentially happening in the background.

self.onpush = function(event) {
  event.waitUntil(
    self.registration.showNotification('WEATHER ADVISORY', {
      body: event.data ? event.data.text() : 'no payload',
      icon: 'icon.png'
    })
  );
};

Of course, after a notification is shown, there is still the matter of dealing with it when it’s been clicked. As such, we need another event listener in the service worker to handle this case.

self.onnotificationclick = function(event) {
  // Dismiss the notification, then open a window to the intended destination
  // ('/' is illustrative; the original snippet was truncated here).
  event.notification.close();
  event.waitUntil(clients.openWindow('/'));
};

In this case, we first dismiss the notification and then we can choose to open a window to the intended destination. You’re also able to sort through the already open windows and focus one of those, or perhaps even navigate an existing window.
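A sketch of that focus-or-open pattern, extending the notificationclick handling above ('/latest' is an illustrative destination, and the outer guard simply lets the snippet load outside a worker context):

```javascript
// Focus an already-open client window if one matches, otherwise open a new one.
function focusOrOpen(url) {
  return self.clients.matchAll({ type: 'window', includeUncontrolled: true })
    .then(function (windowClients) {
      for (var i = 0; i < windowClients.length; i++) {
        if (windowClients[i].url.indexOf(url) !== -1 && 'focus' in windowClients[i]) {
          return windowClients[i].focus();
        }
      }
      return self.clients.openWindow(url);
    });
}

if (typeof self !== 'undefined' && self.clients) {
  self.onnotificationclick = function (event) {
    event.notification.close();
    event.waitUntil(focusOrOpen('/latest'));
  };
}
```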

Push: The Next Generation

As part of our ongoing commitment to expanding the possibilities of the web, Microsoft Edge and PWAs in Windows will handle these service worker push event handlers in the background. That’s right, there’s no need for Microsoft Edge or your PWA to be running for the push to be handled. That’s because we’ve integrated with Windows to allow for a more holistic approach to push notifications. By leveraging Windows’ time-tested process lifetime management, we’re able to offer a system that reacts appropriately to system pressures such as low battery or high CPU and memory usage.

For our users, it means better resource management and battery life. For developers, it means a push event handler that gets to run to completion without interruption from a user action such as closing the browser window or app. Note that the service worker instance running in the foreground to handle fetch events will not necessarily be the same instance that handles a push event in the background.

Notifications in Microsoft Edge and PWAs will be integrated in the Windows Action Center. If you receive a notification and didn’t get the chance to act on it, it will get tucked away into the Action Center for later. That means that notifications never get left unseen. On top of that, the Action Center will group multiple notifications coming from the same domain so that users have an easier time sorting through them.

Service worker: properties

I’d like to take a moment to go over some things you should keep in mind when using service workers in your web app or site. In no particular order, here they are:

  • HTTPS-only. Service workers will not work over HTTP; you will need to use HTTPS. Fortunately, if you’re testing locally, you’re allowed to register service workers on localhost.
  • No DOM access is allowed. As with web workers, you don’t get access to the page’s object model. This means that if you need to change something about the page, you’ll need to use postMessage from the service worker to the page so that the page can make the DOM changes itself.
  • Executes separate from page. Because these scripts are not tied to the lifetime of a page, it’s important to understand that they do not share the same context as the page. Aside from not having access to the DOM (as stated earlier), they won’t have access to the same variables available on the page.
  • Trumps App Cache. Service workers and App Cache don’t play well together: App Cache is ignored when a service worker is in use. Service workers were meant to give more control to the web developer; imagine having to deal with the magic of App Cache while trying to step through the logic of your service worker.
  • Script can’t be on CDN. The JavaScript file for the service worker can’t be hosted on a Content Distribution Network (CDN), it must be on the same domain as the page. However, if you like, you can import scripts from your CDN.
  • Can be terminated any time. Remember that service workers are meant to be short-lived and their lifetime is tied to events. In particular, service workers have a time limit in which they must finish executing their event handlers. In other cases, the browser or the operating system may choose to terminate a service worker that impacts the battery, CPU, or memory consumption. In either case, it would be prudent to not rely on global variables in the service worker in case a different service worker instance is used on a subsequent event that’s being handled.
  • Only asynchronous requests allowed. Synchronous XHR is not allowed here! Neither is localStorage, so it’s best to make use of Indexed DB and the new Caches API described earlier.
  • Service worker to scope is 1:1. You can only have one service worker per scope. If you try to register a different service worker for a scope that already has one, the existing registration will be updated with the new script.


As you can see, service workers are much more than an HTTP proxy; they are a web app model that enables event-driven JavaScript to run independently of web pages. Service workers were brought into the web platform out of the necessity to solve offline, but it’s clear that they can do much more as we continue to extend their capabilities to other scenarios. Today we have push, but in the future there will be other exciting capabilities that bring the web that much closer to offering the captivating and reliable experiences we’ve always wanted.

Go put a worker to work!

So, what are you waiting for? Go install the latest Windows Insider Preview build and test out service workers in Microsoft Edge today. We’d love to hear your feedback, so please file bugs as you see them!

— Ali Alabbas, Program Manager, Microsoft Edge
— Jatinder Mann, Program Manager, Microsoft Edge