Emmy award: Bringing premium media to the web

On April 8th, the National Academy of Television Arts & Sciences recognized Microsoft and other industry leaders with a Technology & Engineering Emmy award for our contributions in standardizing HTML5, Encrypted Media Extensions (EME), and Media Source Extensions (MSE) for a Full TV Experience.

The 70th Technology & Engineering Emmy Awards, Las Vegas; 4/7/2019

John Simmons and I accepting the Emmy on behalf of the Windows and Microsoft Edge teams, respectively.

Today, premium media sites, like Netflix, Amazon Prime, Hulu, and others, use the HTML5 video, Encrypted Media Extensions (EME), and Media Source Extensions (MSE) web standards to deliver premium movie and TV experiences on web sites, web apps, and TV set-top boxes built with web technologies. Before these web standards, most premium media sites relied on plug-ins, like Flash, Silverlight, and others, to deliver these experiences on the web. Unfortunately, plug-ins had many flaws: they required users to install them before they could view the video, they had notoriously poor performance and reliability, and they suffered from many security issues.

Starting in 2013, Microsoft worked with industry partners, like the W3C, Netflix, Comcast, and Google, to develop several web standards that would support these same experiences natively on the web, without plug-ins. We first helped standardize the HTML5 video element, so browsers would natively support a video player without requiring the user to install a plug-in. This video element was great for most videos but had some limitations: it used progressive playback, where the video had to be fully downloaded first, and the video source was visible to anyone. While this is fine for most media, content owners that stream premium video want to deliver it as quickly as possible and at the highest possible quality, and to ensure that only users authorized by the streaming service can view it.

To solve the first problem, we helped standardize Media Source Extensions (MSE) to support adaptive streaming. With this web standard and a streaming server, sites can adaptively switch between different media streams, so users with good bandwidth get the highest-quality, highest-data-rate video streams and users with lower bandwidth get lower-quality streams. This makes users less likely to see buffering as their network conditions change, and removes the need to wait for the full video to download.

To solve the second problem, we helped standardize Encrypted Media Extensions (EME) so web sites could leverage content protection systems, like digital rights management (DRM), for web media. Content owners will not stream premium content if it can be easily saved and shared outside the service. With EME, web sites can ensure their premium media content is protected.

On the browser media team, we worked closely with the Windows team to bring these technologies to IE11 and Microsoft Edge. While IE11 was the first browser to implement many of these early web standards, Microsoft Edge continues to provide best-in-class protected media support. Microsoft Edge often gets the highest resolution and bitrate video because it's the only browser on Windows to use the robust hardware-backed Microsoft PlayReady DRM; as video quality goes up, so does the need for better protection. Sites that rely on hardware-backed PlayReady DRM on Microsoft Edge can stream 1080p or 4K with high dynamic range (HDR) with confidence that their content cannot be stolen, while also giving their users the best battery life because the work is done in hardware. To provide developers with technology choice and the highest level of compatibility, Microsoft Edge also supports multiple DRM systems, including both the Microsoft PlayReady and Google Widevine DRM systems.


One of the Emmy awards given to Microsoft innovators.

I'm very appreciative of the National Academy of Television Arts & Sciences for recognizing Microsoft's contributions in helping make the web better for premium video experiences. It's great to see how far the web has come in just a few years, and I'm looking forward to seeing how we continue to make it even better with more powerful capabilities in the future.

Build 2019: Moving the web forward with Microsoft Edge

Last year, we announced that the next major version of Microsoft Edge would be based on the Chromium open source project and that we intended to become significant contributors to that project, in ways that make not just Microsoft Edge but other browsers better, on both PCs and other devices.

At Build 2019, Gaurav Seth and I went deeper into our plans for moving the web forward with Microsoft Edge, including how we intended to contribute to Chromium, our ongoing work in web standards, and our plans for delivering a consistent set of developer tools and application experiences built using web technologies.


Video link to the Build 2019 session: Moving the web forward with Microsoft Edge

What to expect in the new Microsoft Edge Insiders Channel

John and I share what to expect in the new Microsoft Edge Insiders Channel on the Microsoft Edge Dev blog.

Today we are shipping the first Dev and Canary channel builds of the next version of Microsoft Edge, based on the Chromium open-source project. We’re excited to be sharing this work at such an early stage in our development process. We invite you to try out the preview today on your devices, and we look forward to working together with the Microsoft Edge Insider community to make browsing the best experience possible for everyone.

In this post, we’ll walk you through how the new release channels work, and share a closer look at our early work in the Chromium open source project, as
well as what’s coming next.

Introducing the Microsoft Edge Insider Channels

The new Microsoft Edge builds are available through preview channels that we call “Microsoft Edge Insider Channels.” We are starting by launching the first two Microsoft Edge Insider Channels, Canary and Dev, which you can download and try at the
Microsoft Edge Insider site. These channels are available starting today on all supported versions of Windows 10, with more platforms coming soon.

Screenshot of download page showing three Microsoft Edge Insider Channels - Beta Channel, Dev Channel, and Canary Channel

Canary channel will be updated daily, and Dev channel will be updated weekly. You can even choose to install multiple channels side-by-side for testing—they will have separate icons and names so you can tell them apart. Support for other platforms, like Windows 7, Windows 8.1, macOS, and other channels, like Beta and Stable, will come later.

Every night, we produce a build of Microsoft Edge―if it passes automated testing, we’ll release it to the Canary channel. We use this same channel internally to validate bug fixes and test brand new features. The Canary channel is truly the bleeding edge, so you may discover bugs before we’ve had a chance to discover and fix them. If you’re eager for the latest bits and don’t mind risking a bug or two, this is the channel for you.

If you prefer a build with slightly more testing, you might be interested in the Dev channel. The Dev channel is still relatively fresh―it’s the best build of the week from the Canary channel. We look at several sources, like user feedback, automated test results, performance metrics, and telemetry, to choose the right Canary build to promote to the Dev channel. If you want to use the latest development version of Microsoft Edge as a daily driver, this is the channel for you. We expect most users will be on the Dev channel.

Later, we will also introduce the Beta and Stable channels. The Beta channel reflects a significantly more stable release and will be a good target for Enterprises and IT Pros to start piloting the next version of Microsoft Edge.

We are not changing the existing version of Microsoft Edge installed on your devices at this time – it will continue to work side by side with the builds from any of the Microsoft Edge Insider Channels.

Adopting and contributing to the Chromium open source project

When we initially announced our decision to adopt Chromium as the foundation for future versions of Microsoft Edge, we published a set of open source principles and declared our intent to contribute to the Chromium project to make Microsoft Edge and other Chromium-based browsers better on PCs and other devices.

While we will continue to focus on delivering a world class browsing experience with Microsoft Edge’s user experience and connected services, when it comes to improving the web platform, our default position will be to contribute to the Chromium project.

We still have a lot to learn as we increase our use of and contributions to Chromium, but we have received great support from Chromium engineers in helping us get involved in this project, and we’re pleased to have landed some modest but meaningful contributions already. Our plan is to continue working in Chromium rather than creating a parallel project, to avoid any risk of fragmenting the community.

Our early contributions include landing over 275 commits into the Chromium project since we joined this community in December. We also have started to make progress on some of the initial areas of focus we had shared:


We are committed to building a more accessible web platform for all users. Today, Microsoft Edge is the only browser to earn a perfect score on the HTML5Accessibility browser benchmark, and we’re hoping to bring those contributions to the Chromium project and improve web experiences for all users.

  • Modern accessibility APIs. To enable a better accessibility experience for screen readers, like Windows Narrator, magnifiers, braille displays, and other accessibility tools, we’ve shared our intent to implement support for the Microsoft UI Automation interfaces, a more modern and secure Windows accessibility framework, in Chromium. We’re partnering with Google’s Accessibility team and other Chromium engineers to land commits and expect the full feature to be completed later this year.
  • High contrast. To ensure our customers have the best readability experience, we’re working in the W3C CSS working group to standardize the high-contrast CSS Media query and have shared our intent to implement it in Chromium. This will allow customers to use the Windows Ease of Access settings to select their
    preferred color contrast settings to improve content readability on Windows devices.
  • HTML video caption styling. We’ve partnered with Chromium engineers to land support for Windows Ease of Access settings to improve caption readability on Windows 10.
  • Caret browsing. For customers who use their keyboard to navigate the web and select text, we’ve shared our intent to implement caret browsing in Chromium.
  • We’re starting to work with our Chromium counterparts to improve the accessibility of native web controls, like media and input controls. Over time we expect this work will help Chromium earn a perfect score on the HTML5Accessibility browser benchmark.


We’ve been collaborating with Google engineers to enable Chromium to run natively on Windows on ARM devices starting with Chromium 73. With these contributions, Chromium-based browsers will soon be able to ship native implementations for ARM-based Windows 10 PCs, significantly improving their performance and battery life.


To help our customers with touch devices get the best possible experience, we’ve implemented better support for Windows touch keyboard in Chromium, now supporting touch text suggestions as you type and “shape writing” that lets you type by swiping over keys without releasing your finger.


Microsoft Edge is known for class-leading scrolling experiences on the web today, and we’re collaborating closely with Chromium engineers to make touchpad, touch, mouse wheel, keyboard, and scrollbar scrolling as smooth as possible. We’re still early in this investigation, but have started sharing some ideas in this area.


Premium media sites use the encrypted media extensions (EME) web standard and digital rights management (DRM) systems to protect streaming media content so that it can only be played by users authorized by the streaming service. In fact, Microsoft and other industry partners were recognized with a Technology & Engineering Emmy award yesterday for helping bring premium media to the web through this and other web standards. To provide users with the highest level of compatibility and web developers with technology choice, Microsoft Edge now supports both Microsoft PlayReady and Google Widevine DRM systems.

While Microsoft Edge often gets the highest resolution and bitrate video because it uses the robust hardware-backed Microsoft PlayReady DRM, there are some sites that only support the Google Widevine DRM system. Sites that rely on hardware-backed PlayReady DRM on Microsoft Edge will be able to continue to stream 1080p or 4K with high dynamic range (HDR) or Dolby Vision, while those that only support Widevine will just work in Microsoft Edge for the first time.

We also want to help contribute improvements to video playback power efficiency that many of our Microsoft Edge users have come to expect. We’re early in these investigations but will be partnering closely with the Chromium team on how we can help improve this space further.

Windows Hello

Microsoft Edge supports the Windows Hello authenticator as a more personal and secure way to use biometric authentication on the web for password-less and two-factor authentication scenarios. We’ve worked with the Chromium team to land Windows Hello support in the Web Authentication API in Chromium 73+ on the latest Windows 10 Insider Preview release.
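A hedged sketch of what a Web Authentication request can look like; the relying-party name, user fields, and challenge handling below are placeholders for illustration, not Microsoft's implementation:

```javascript
// Sketch of creating a credential with the Web Authentication API.
// The challenge must be random bytes from your server; the relying party
// and user fields here are placeholders only.
async function createPlatformCredential(credentials, challenge) {
  return credentials.create({
    publicKey: {
      challenge,                                            // server-provided bytes
      rp: { name: 'Example Site' },                         // relying party (placeholder)
      user: {
        id: new Uint8Array(16),                             // opaque user handle
        name: 'user@example.com',
        displayName: 'Example User',
      },
      pubKeyCredParams: [{ type: 'public-key', alg: -7 }],  // ES256
      // 'platform' asks for a built-in authenticator, e.g. Windows Hello.
      authenticatorSelection: { authenticatorAttachment: 'platform' },
    },
  });
}
// Usage in the page: createPlatformCredential(navigator.credentials, challenge)
```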

Evolving the web through standards

While we’re participating in the Chromium open source project, we still believe the evolution of the open web is best served through the standards communities, and the open web benefits from open debate from a wide variety of perspectives.

We remain deeply engaged in standards discussions where the perspectives of vendors developing different browsers and the larger web community can be heard and considered. You can keep track of all Microsoft explainer documents on the Microsoft Edge Explainers GitHub.

HTML Modules

For example, we recently introduced the HTML Modules proposal, which is now being developed in the W3C and WHATWG Web Components Incubation Groups.

We’ve heard from web developers that while ES6 Script Modules are a great way for developers to componentize their code and create better dependency management systems, the current approach doesn’t help developers who use declarative HTML markup. This has forced developers to re-write their code to generate markup dynamically.

We’ve taken lessons learned from HTML Imports to introduce an extension of the ES6 Script Modules system to include HTML Modules. Considering the early support we’ve received on this feature from standards discussions, we’ve also shared our intent to implement this feature in Chromium.

User Agent String

With Microsoft Edge adopting Chromium, we are changing our user agent string to closely resemble the Chromium user agent string, with the addition of the “Edg” token. If you’re blocking site access based on user agent strings, please update your logic to treat this string as another Chromium-based browser. Below is the user agent string for the latest Dev Channel build of Microsoft Edge:


Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.48 Safari/537.36 Edg/


We’ve selected the “Edg” token to avoid compatibility issues that may be caused by using the string “Edge,” which is used by the current version of Microsoft Edge based on EdgeHTML. The “Edg” token is also consistent with existing tokens used on iOS and Android. We recommend that developers use feature detection where possible and avoid browser version detection through the user agent string, as it results in more maintenance and fragile code.
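If you do need to recognize the new Microsoft Edge in analytics, a check like the following (a sketch, not an official detection method) matches the "Edg" token without also matching the EdgeHTML "Edge" token:

```javascript
// Match the "Edg/" token used by the Chromium-based Microsoft Edge. The
// EdgeHTML-based browser sends "Edge/", which this deliberately does not
// match. Prefer feature detection for behavior decisions; reserve UA
// parsing for analytics.
function isChromiumEdge(userAgent) {
  return /\bEdg\//.test(userAgent);
}
```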

User Experience

We are committed to building a world class browser with Microsoft Edge through differentiated user experience features and connected services. With this initial release, we have made a number of changes to the user interface to make our product feel more like Microsoft Edge.

However, you will continue to see the look and feel of the browser evolve in future releases as we iterate and listen to customer feedback. We do not plan to contribute user experience or Microsoft service changes to Chromium, since browser vendors generally like to make their own decisions in these areas.

We know that this initial release is still missing a few features that are available in the current version of Microsoft Edge. We’re in the early stages and are intentionally focusing on fundamentals as we continue to work towards a complete feature set.

Over time, we will roll out new features and run experiments to gauge user interest and satisfaction, and to assess the quality of each new feature or improvement. This will help us ensure that all new features address our customers’ needs in the best possible way and meet our quality standards.

Integration with Microsoft services

While the next version of Microsoft Edge will be based on Chromium, we intend to use the best of Microsoft wherever we can, including our services integrations. Some of these services integrations include:

  • Bing Search powers search and address bar suggestions by default.
  • Windows Defender SmartScreen delivers best-in-class phishing and malware protection when navigating to sites and downloading content.
  • Microsoft Account service and Azure Active Directory can now be used to sign in to the browser to help you manage both your personal and work accounts. You can even use multiple identities at the same time in different browser sessions.
  • Microsoft Activity Feed Service synchronizes your data across Microsoft Edge preview builds. We currently synchronize your favorites across your Windows 10 desktop devices running Microsoft Edge preview builds. In future builds, we
    will also sync passwords, browsing history, and other settings across all supported platforms, including Microsoft Edge on iOS and Android.
  • Microsoft News powers the new tab experience, giving you the choice of an inspirational theme with vivid Bing images, a focused theme that helps you get straight to work, or a more news-focused informational theme.


Getting your feedback is an important step in helping us make a better browser – we consider it essential to create the best possible browsing experience. If you run into any issues or have feedback, please use the “Send Feedback” tool in Microsoft Edge. Simply click the smiley face next to the Menu button and let us know what you like or if there’s something we can improve.

For web developers, if you encounter an issue that reproduces in Chromium, it’s best to file a Chromium bug. For problems in the existing version of Microsoft Edge, please continue to use the EdgeHTML Issue Tracker.

You can also find the latest information on the next version of Microsoft Edge and get in touch with the product team to share feedback or get help on the Microsoft Edge Insider site.

We’re delighted to share our first Canary and Dev builds of the next version of Microsoft Edge! We hope you’ll try the preview out today, and we look forward to hearing your feedback in the Microsoft Edge Insider community.

Jatinder Mann, Group Program Manager, Web Platform
John Hazen, Group Program Manager, Operations

Service Workers: Going beyond the page

Ali and I announce Service Worker and Progressive Web App support on the Microsoft Edge blog:

We’re thrilled to announce that today’s Windows Insider build enables Service Workers by default in Microsoft Edge for the first time.

This is an exciting milestone for us and for the web! These new APIs allow the web to offer better experiences when devices are offline or have limited connectivity, and to send push notifications even when the page or browser isn’t open.

This milestone also establishes the foundation for full-featured Progressive Web App experiences in Microsoft Edge and the Microsoft Store. We’ve carefully tailored our implementation together with Windows and are excited to be the only browser on Windows 10 to provide push handling in the background to optimize for better battery usage. We’ll have more to share about PWAs on Windows in the weeks ahead, so be sure to stay tuned!

We believe Service Workers are a foundational new tool for the web with the potential to usher in a new golden age of web apps. You can try out web experiences powered by Service Workers in Microsoft Edge starting with today’s Windows Insider release, 17063.

The state of the web

Not too long ago, the web’s capabilities were lagging behind what native apps could do. Browser vendors, standards bodies, and the web community have relentlessly attacked this gap over the last decade, with the goal of enabling richer web experiences.

A particularly egregious sore spot for the web has always been how it handled—or failed to handle—the lack of an internet connection, or a poor-quality connection. Offline and the web never really went well together—in most cases, we were given a frustrating error page that only made it clearer that the web’s greatest asset was also its greatest weakness: the internet. In contrast, native apps are typically designed to provide a good experience even while offline or experiencing spotty service.

On top of that, native apps can re-engage their users with push notifications. On the web, after the browser or the app disappears, so does your ability to deliver relevant information or updates to your users.

In the rearview: Application Cache

By 2012, the web had an answer for offline: a new standard called Application Cache (or App Cache for short). A site author could list URLs for the browser to keep in a special cache, so that when you visited the page you would see something other than an infuriating error page that would make you want to smash your keyboard and throw it against the wall.

Unfortunately, this wasn’t the silver bullet we were looking for in terms of bringing offline to the web. There are more than a few well-documented limitations of App Cache that made it confusing and error prone for many users and web developers. The sum of it all was that there was little control over how the App Cache would work, since most of the logic occurred behind the scenes in the browser.

That meant if you ran into an issue, it would be exceedingly difficult to understand how to resolve it. There was an obvious need for something that gave developers greater control: something that offered the capabilities of App Cache but was more programmatic and dynamic, and did away with many of the limitations that made it difficult to debug.

Hit Rewind

App Cache left a lot to be desired. For the next swing at enabling offline scenarios, it was clear that browsers needed to provide web developers true control over what would happen when a page and its sub-resources were downloaded, rather than having it automatically and transparently handled in the browser.

With the ability to intercept network requests from a page and to prescribe what to do with each, site authors would be able to respond back to the page with a resource that it could use. Before we get there, though, we first need to revisit one of the most fundamental aspects of the web: fetching a resource.

How fetching!

As you may recall from my last post on Fetch, we now have the fetch() method as well as the Request and Response primitives. As a refresher, here’s how you might retrieve some JSON data using the fetch API:
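A minimal sketch (the endpoint URL is a placeholder):

```javascript
// Fetch a resource and parse the body as JSON. The URL is a placeholder
// for your own endpoint.
async function getJson(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
// Usage: getJson('/api/data.json').then(data => console.log(data));
```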

Every request that happens on a page (including the initial navigation, CSS, images, scripts and XHR) is defined as a fetch. The fetch() method (as shown in the code sample) is just a way to explicitly initiate a fetch, while implicit fetches occur when loading a page and all of its sub-resources.

Since we’ve unified the concepts of fetching resources across the web platform, we can provide site authors the chance to define their own behavior via a centralized algorithm. So, how do we pass that control over to you?

Service worker: the worker that serves

Web workers have long been a great tool to offload intensive JavaScript to a separate execution context that doesn’t block the UI or interaction with the page. Given the right conditions, we can repurpose the concept of a web worker to allow a developer to write logic in response to a fetch occurring on a page. This worker wouldn’t just be a web worker, though. It deserves a new name: Service worker.

A service worker, like a web worker, is written in a JavaScript file. In the script, you can define an event listener for the fetch event. This event is special, in that it gets fired every time the page makes a request for a resource. In the handler for the fetch event, a developer will have access to the actual Request being made.

You can choose to respond by calling fetch() with the provided Request, which returns a Response that you pass back to the page. Doing this essentially follows the typical browser behavior for that request. It’s not intrinsically useful to just re-fetch the request, though; it would be far more useful if we could save previous Response objects for later.
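That pass-through behavior can be sketched in a couple of lines; the commented wiring line shows where it would live in a service worker file:

```javascript
// In sw.js you would wire this up with:
//   self.addEventListener('fetch', event => event.respondWith(passThrough(event.request)));
// Simply fetching the request mirrors what the browser would do anyway.
function passThrough(request) {
  return fetch(request);
}
```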

Cache it!

The Cache API allows us to look up a specific Request and get its associated Response object. The APIs give access to a new underlying key/value storage mechanism, where the key is a Request object and the value is a Response object. The underlying caches are separate from the browser’s HTTP cache and are origin-bound (meaning that they are isolated based on scheme://hostname:port) so that you cannot access caches outside of your origin. Each origin can define multiple different caches with different names. The APIs allow you to asynchronously open and manipulate the caches by making use of Promises:
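A sketch of that key/value flow (the cache name and path are placeholders):

```javascript
// Open (or create) a named cache for this origin, store a Response under a
// Request key, then look it up again later, even while offline.
async function cacheAndLookup() {
  const cache = await caches.open('api-cache-v1');
  await cache.put('/api/greeting', new Response('hello'));
  const match = await cache.match('/api/greeting');
  return match ? match.text() : null;
}
```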

These caches are completely managed by the developer, including updating the entries and purging them when they’re no longer needed; this lets you rely on what will be there even when you’re not connected to the internet.

Although the Cache API is defined as part of the Service Worker spec, it can also be accessed from the main page.

So now you have two asynchronous storage APIs to choose from: IndexedDB and the Cache API. In general, if what you’re trying to store is URL-addressable, use the Cache API; for everything else, use IndexedDB.

Now that we have a way to save those Response objects for later use, we’re in business!

Back to the worker

With a service worker, we can intercept the request and respond from cache. This gives us the ability to improve page load performance and reliability, as well as to offer an offline experience. You can choose to let the fetch go through to the internet as is, or to get something from the cache using the Cache API.

The first step to using a service worker is to register it on the page. You can do this by first feature-detecting and then calling the necessary APIs:
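A sketch of that registration step; '/sw.js' and the scope are placeholders, and the navigator object is passed in to keep the feature check explicit:

```javascript
// Feature-detect, then register. '/sw.js' and the scope are placeholders.
function registerServiceWorker(nav) {
  if (!nav || !('serviceWorker' in nav)) {
    return Promise.resolve(null); // unsupported: the page just uses the network
  }
  return nav.serviceWorker.register('/sw.js', { scope: '/' });
}
// Usage in the page:
//   registerServiceWorker(navigator)
//     .then(reg => reg && console.log('Registered with scope:', reg.scope));
```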

As part of the registration, you’ll need to specify the location of the service worker script file and define the scope. The scope is used to define the range of URLs that you want the service worker to control. After a service worker is registered, the browser will keep track of the service worker and the scope it is registered to.

Upon navigating to a page, the browser will check if a service worker is registered for that page based on the scope. If so, the page will go on to use that service worker until it is navigated away or closed. In such a case, the page is said to be controlled by that service worker. Otherwise, the page will instead use the network as usual, and will not be controlled by a service worker.

Upon registration, the service worker won’t control the page that registered it. It will take control if you refresh the page or you open a new page that’s within its scope.

After you initiate registration, the service worker goes through the different phases of its lifecycle.

The service worker lifecycle

Let’s unpack the different phases of the service worker’s lifecycle, starting with what happens once you try to register it:

  • Installing: This is the first step that any service worker goes through. After the JavaScript file has been downloaded and parsed by the browser, the browser fires the install event in your script. That’s when you’ll want to get everything ready, such as priming your caches.

In the following example, the oninstall event handler in the service worker will create a cache called “static-v1” and add all the static resources of the page to the cache for later use by the fetch handler.
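A sketch of such a handler; the wiring line is commented, and the resource paths are placeholders:

```javascript
// In sw.js:
//   self.addEventListener('install', event => event.waitUntil(onInstall()));
// Prime a cache named 'static-v1' with the page's static resources.
async function onInstall() {
  const cache = await caches.open('static-v1');
  await cache.addAll(['/', '/index.html', '/styles/main.css', '/script/app.js']);
}
```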

  • Installed: At this point, the setup is complete, and the service worker is waiting for all pages/iframes (clients) controlled by this service worker registration to be closed so that it can be activated. It could be potentially problematic to change the service worker for pages that are still actively using a previous version, so the browser instead waits until they’ve been navigated away or closed.
  • Activating: Once no clients are controlled by the service worker registration (or if you called the skipWaiting API), the service worker moves to the activating phase. The browser then fires the activate event in the service worker, which gives you the opportunity to clean up after previous workers that may have left things behind, such as stale caches.

In this example, the onactivate event handler in the service worker will remove all caches that are not named “static-v1.”
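A sketch of such a cleanup handler:

```javascript
// In sw.js:
//   self.addEventListener('activate', event => event.waitUntil(onActivate()));
// Delete every cache on this origin except the current 'static-v1'.
async function onActivate() {
  const names = await caches.keys();
  await Promise.all(
    names.filter(name => name !== 'static-v1').map(name => caches.delete(name))
  );
}
```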

  • Activated: Once it’s been activated, the service worker can now handle fetch and other events as well!

In this example, the onfetch event handler in the service worker responds to the page with a match from the cache if one exists; if there isn’t an entry in the cache, it falls back to fetching from the network instead. If that fetch fails, it resorts to returning a fallback.
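That cache-first strategy might be sketched like this (the '/offline.html' fallback path is a placeholder):

```javascript
// In sw.js:
//   self.addEventListener('fetch', event => event.respondWith(onFetch(event.request)));
// Serve from cache first, fall back to the network, and finally to an
// offline page if the network fails.
async function onFetch(request) {
  const cached = await caches.match(request);
  if (cached) return cached;
  try {
    return await fetch(request);
  } catch (err) {
    return caches.match('/offline.html');
  }
}
```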

  • Redundant: The final phase of the service worker is when it is being replaced by another service worker because there’s a new one available that is going to take its place.

There’s more to it: the big picture

So far, we’ve explored the following service worker events: install, activate, and fetch. Install and activate are considered lifetime events while fetch is considered a functional event. What if we could expand on the service worker’s programming model and introduce other functional events that could plug in to it? Given that service workers are event-driven and are not tied down to the lifetime of a page, we could add other events such as push and notificationclick which would present the necessary APIs to enable push notifications on the web.

Push it to the limit

Push notifications give developers a timely, power-efficient, and dependable way to re-engage users with customized and relevant content. Compared to current web notifications, a push notification can be delivered to a user without the browser, app, or page needing to be open.

The W3C Push API and Notification API go hand-in-hand to enable push notifications in modern browsers. The Push API is used to set up a push subscription and is invoked when a message is pushed to the corresponding service worker. The service worker then is responsible for showing a notification to the user using the Notification API and reacting to user interaction with the notification.

A standardized method of message delivery is also important for the W3C Push API to work consistently across all major browsers, since application servers need to work with multiple push services. For instance, Google Chrome and Mozilla Firefox use Firebase Cloud Messaging (FCM) and Mozilla Cloud Services (MCS), respectively, while Microsoft Edge relies on the Windows Push Notification Service (WNS) to deliver push messages. To reach reasonable interoperability with other browsers’ messaging services, WNS has now deployed support for the Web Push protocols being finalized within the IETF, as well as the Message Encryption spec and the Voluntary Application Server Identification (VAPID) spec for web push. Web developers can now use the Web Push APIs and service workers to provide an interoperable push service on the web.

To start, you’ll first need to make sure your web server is set up to send pushes. The Web-Push open-source library is a great reference for anyone new to web push; the contributors have done a reasonable job of keeping up with the IETF specs. After starting up a Node.js server based on the web-push library, you’ll need to set up the VAPID keys. Keep in mind that HTTPS is required for both service workers and push. The VAPID keys only need to be generated once, which can be done easily using the corresponding function in the web-push library.

Once the server is sorted out, it’s time to take advantage of push in your site or app. When the page loads, the first thing you’ll want to do is get the public key from the application server so that you can set up the push subscription.

With the public key in hand, as before, we’ll need to install the service worker, but this time, we’ll also create a push subscription.

Before a new push subscription is created, Microsoft Edge will check whether a user granted permission to receive notifications. If not, the user will be prompted by the browser for permission. You can read more about permission management in an earlier post about Web Notifications in Microsoft Edge. From a user’s perspective, it’s not obvious whether a notification will be shown via the page or through a push service, so we are using the same permission for both types of notifications.

To create a push subscription, you’ll need to set the userVisibleOnly option to “true” – meaning a notification must be shown as a result of a push – and provide a valid applicationServerKey. If there is already a push subscription, there is no need to subscribe again.
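Put together, the subscription step looks roughly like the following sketch. The '/sw.js' path and helper names are illustrative, but userVisibleOnly and applicationServerKey are the options described above; note that applicationServerKey expects the VAPID public key as a Uint8Array, so a small base64url-decoding helper is commonly used:

```javascript
// Convert a base64url-encoded VAPID public key into the Uint8Array
// that pushManager.subscribe expects as applicationServerKey.
function urlBase64ToUint8Array(base64String) {
  var padding = '='.repeat((4 - (base64String.length % 4)) % 4);
  var base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
  var raw = typeof atob === 'function'
    ? atob(base64)
    : Buffer.from(base64, 'base64').toString('binary'); // non-browser fallback
  var output = new Uint8Array(raw.length);
  for (var i = 0; i < raw.length; ++i) output[i] = raw.charCodeAt(i);
  return output;
}

// Register the service worker and create the push subscription.
// (Browser-only: navigator.serviceWorker requires a secure browsing context.)
function subscribeForPush(serverPublicKey) {
  return navigator.serviceWorker.register('/sw.js')
    .then(function (registration) {
      return registration.pushManager.subscribe({
        userVisibleOnly: true, // a notification must be shown for each push
        applicationServerKey: urlBase64ToUint8Array(serverPublicKey)
      });
    });
}
```

The resolved subscription object contains the push endpoint and encryption keys, which you would send to your application server so it can deliver pushes.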

At any point when a push is received by the client, a corresponding service worker is run to handle the event. As part of this push handling, a notification must be shown so that the user understands that something is potentially happening in the background.
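In the service worker, that handling might look like the following sketch. The payload fields (title, body, icon, url) are hypothetical; showNotification is what satisfies the userVisibleOnly promise made at subscription time:

```javascript
// sw.js -- handle an incoming push by showing a notification.
// The payload shape here is an assumption; adapt it to your server's format.
function handlePush(event) {
  var data = event.data ? event.data.json() : {};
  event.waitUntil(
    self.registration.showNotification(data.title || 'New message', {
      body: data.body,
      icon: data.icon,
      data: { url: data.url } // carried through to the notificationclick event
    })
  );
}

// Register the handler when running inside a worker global scope
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('push', handlePush);
}
```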

Of course, after a notification is shown, there is still the matter of dealing with it when it’s been clicked. As such, we need another event listener in the service worker to handle this case.

In this case, we first dismiss the notification and then we can choose to open a window to the intended destination. You’re also able to sort through the already open windows and focus one of those, or perhaps even navigate an existing window.
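A sketch of that handler follows; the url field mirrors the hypothetical data passed to showNotification in the push step:

```javascript
// sw.js -- dismiss the notification, then focus an existing window or open one.
function handleNotificationClick(event) {
  event.notification.close(); // dismiss the notification first
  var url = (event.notification.data && event.notification.data.url) || '/';
  event.waitUntil(
    self.clients.matchAll({ type: 'window' }).then(function (windows) {
      // Focus an already-open window if one matches, otherwise open a new one
      var existing = windows.filter(function (w) { return w.url === url; })[0];
      return existing ? existing.focus() : self.clients.openWindow(url);
    })
  );
}

if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('notificationclick', handleNotificationClick);
}
```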

Push: The Next Generation

As part of our ongoing commitment to expanding the possibilities of the web, Microsoft Edge and PWAs in Windows will handle these service worker push event handlers in the background. That’s right, there’s no need for Microsoft Edge or your PWA to be running for the push to be handled. That’s because we’ve integrated with Windows to allow for a more holistic approach to push notifications. By leveraging Windows’ time-tested process lifetime management, we’re able to offer a system that reacts appropriately to system pressures such as low battery or high CPU and memory usage.

For our users it means better resource management and battery life expectations. For our developers, it means a push event handler that will get to run to completion without interruption from a user action such as closing the browser window or app. Note that a service worker instance that is running in the foreground for the fetch event will not be the same as the one in the background handling the push event.

Notifications in Microsoft Edge and PWAs will be integrated in the Windows Action Center. If you receive a notification and didn’t get the chance to act on it, it will get tucked away into the Action Center for later. That means that notifications never get left unseen. On top of that, the Action Center will group multiple notifications coming from the same domain so that users have an easier time sorting through them.

Service worker: properties

I’d like to take a moment to go over some things you should keep in mind when using service workers in your web app or site. In no particular order, here they are:

  • HTTPS-only. Service workers will not work over HTTP; you will need to use HTTPS. Fortunately, if you’re testing locally, you’re allowed to register service workers on localhost.
  • No DOM access is allowed. As with web workers, you don’t get access to the page’s object model. This means that if you need to change something about the page, you’ll need to use postMessage from the service worker to the page so that the page can apply the DOM changes itself.
  • Executes separate from page. Because these scripts are not tied to the lifetime of a page, it’s important to understand that they do not share the same context as the page. Aside from not having access to the DOM (as stated earlier), they won’t have access to the same variables available on the page.
  • Trumps App Cache. Service workers and App Cache don’t play well together. App Cache will be ignored when service worker is in use. Service workers were meant to give more control to the web developer. Imagine if you had to deal with the magic of App Cache while you’re trying to step through the logic of your service worker.
  • Script can’t be on CDN. The JavaScript file for the service worker can’t be hosted on a Content Distribution Network (CDN), it must be on the same domain as the page. However, if you like, you can import scripts from your CDN.
  • Can be terminated any time. Remember that service workers are meant to be short-lived and their lifetime is tied to events. In particular, service workers have a time limit in which they must finish executing their event handlers. In other cases, the browser or the operating system may choose to terminate a service worker that impacts the battery, CPU, or memory consumption. In either case, it would be prudent to not rely on global variables in the service worker in case a different service worker instance is used on a subsequent event that’s being handled.
  • Only asynchronous requests allowed. Synchronous XHR is not allowed here! Neither is localStorage, so it’s best to make use of IndexedDB and the Cache API described earlier.
  • Service worker to scope is 1:1. You’ll only be able to have one service worker per scope. That means if you try to register a different service worker for a scope that already has one, the existing registration will be updated to the new service worker.


As you can see, service workers are so much more than an HTTP proxy; they are in fact a web app model that enables event-driven JavaScript to run independent of web pages. Service workers were brought into the web platform as a necessity to solve offline, but it’s clear that they can do so much more as we continue to extend their capabilities to other scenarios. Today we have push, but in the future there will be other exciting capabilities that will bring the web that much closer to offering the captivating and reliable experiences we’ve always wanted.

Go put a worker to work!

So, what are you waiting for? Go install the latest Windows Insider Preview build and test out service workers in Microsoft Edge today. We’d love to hear your feedback, so please file bugs as you see them!

— Ali Alabbas, Program Manager, Microsoft Edge
— Jatinder Mann, Program Manager, Microsoft Edge

Get better quality video with Microsoft Edge

My team and I wrote this post on the Windows Experience blog:

When it comes to video, the closer to the hardware, the better.  From video hardware acceleration to PlayReady Content Protection and the Protected Media Path, Windows 10 is designed to provide the highest quality, most secure, and most power-efficient video playback available on any version of Windows.  Microsoft Edge has been engineered to optimize for and take advantage of these Windows 10 built-in media capabilities, providing the best video experience of any browser on Windows 10 based on our data and testing. So go ahead, binge watch your favorite shows on Microsoft Edge!

Most Power Efficient Video Playback

As we recently blogged, you get more out of your battery with Microsoft Edge.  This time-lapse video test shows Microsoft Edge lasting three hours longer than Google Chrome streaming the same content side by side on identical Surface Book machines.  Our results have shown that Microsoft Edge outlasts the rest, delivering 17%-70% more battery life than the competition. Today, we are publishing details on the test methodology that was used and in this post, we’ll dig into the technologies that make Microsoft Edge so much more efficient.

Battery life comparison

Microsoft Edge has the most power efficient video playback because it was engineered to take advantage of Windows 10 platform features that keep the CPU in low power states during video playback.  It does this by offloading CPU intensive video processing operations to power efficient peripheral hardware found in modern PCs and mobile devices. This starts with the use of Microsoft DirectX video acceleration (DXVA) to offload decoding of compressed video.  For rendering, Microsoft Edge also works with Multiplane overlay display hardware and sophisticated graphics and UI compositing features to offload video rendering operations. This significantly reduces memory bandwidth required for video processing and compositing at the display.

CPU management in the Windows 10 media stack keeps the CPU running in the lowest power states possible, without compromising UI responsiveness.  It is also able to run the display at lower refresh rates during full screen playback of film based content.  This saves power by reducing memory bandwidth and improves the quality of film playback by reducing video judder caused by the conversion of the film frame rate (24 Hz, for example) on displays running at 60 Hz.  And Microsoft Edge also takes advantage of a feature of the Windows 10 audio stack to offload audio stream processing from the main CPU to dedicated power-efficient audio processing hardware.

Power savings from these features are available to other browsers, but it requires other browser vendors to optimize performance on Windows devices, while Microsoft Edge was designed to provide these power savings innately.  And to be clear, the power difference playing higher quality content, like 1080p, becomes even greater.  Tight integration with Windows media, graphics, and composition stacks allows Microsoft Edge to render the highest quality content with minimal power draw.

Higher Quality Video

In our video tests, not only was Microsoft Edge the most power efficient, but the premium video site we used also sent higher resolution and bitrate video to Microsoft Edge compared to the other browsers.  Here are the details:

Browser                              Maximum Resolution   Maximum Bitrate
Microsoft Edge (EdgeHTML 13.10586)   1080p                7500 kbps
Opera 38                             720p                 4420 kbps
Mozilla Firefox 46                   720p                 4420 kbps
Google Chrome 51                     720p                 4420 kbps

The fact that Microsoft Edge received 1080p content in our power test means it actually incurred a somewhat higher power draw than it would have playing 720p content like the other browsers.  Even so, Microsoft Edge provided the highest quality content and still delivered the longest battery life.

Content owners that stream premium video on the web need to make choices that balance providing the best quality possible, while also ensuring that the content is protected.  As quality increases, so does the need for strong Digital Rights Management (DRM), systems that protect media streams so that they can only be played by users authorized by the streaming service.  This is important now as companies make decisions to stream 1080p, and will become even more important as video resolutions increase. Content owners will not stream premium content if it can be easily saved and shared outside the service.

Microsoft Edge was built to take advantage of platform features in Windows 10.  It is optimized to use PlayReady Content Protection and the media engine’s Protected Media Path, whereas Chrome and Opera implement Widevine, and Firefox implements both Adobe Access and Widevine.  Like video decode efficiency, content protection in the platform and closer to the hardware can offer superior performance.  Likewise, the better the content protection, the better the video quality the service is likely to provide to that browser.

What Does the Future Hold?

Displays have already moved beyond Full HD/1080p to UltraHD/4K video.  Upcoming improvements to audio and video quality go beyond just more audio channels and more pixels.  We will see an increase in the number and intensity of colors displayed (color gamut), higher frame rate, high dynamic range (HDR) and broad availability of immersive, spatial or 3-D audio. In the Alliance for Open Media, Microsoft and other leading internet companies are developing next-generation media formats, codecs and other technologies for UltraHD video.

Microsoft is working with industry leading graphics chipset companies to expand support for hardware acceleration of new higher quality content.  We are also working with chipset companies to support Enhanced Content Protection that moves the Protected Media Path into peripheral hardware for an even higher level of security.  The code running in the peripheral is isolated from the main OS, which provides an increased level of protection, and will likely enable content owners to stream 4k or higher resolution video with confidence.  We call the improved security from hardware based DRM “Security Level 3000”.

Chipsets that support hardware based DRM have been shipping in newer PCs.  The full support system awaits completion of software components that are coming in Fall of 2016.  The Windows 10 platform has significant power and security advantages for media playback available to any application.  Paired with the media features built into Windows 10, Microsoft Edge has best-in-class battery life and video playback quality on Windows 10 devices, including PCs, Mobile, Xbox, HoloLens, and other devices.

Let us know what you think over at @MSEdgeDev!

  • Jatinder Mann, Senior Program Manager Lead, Microsoft Edge
  • Jerry Smith, Senior Program Manager, Microsoft Edge
  • John Simmons, Principal Program Manager, Windows Media

Managing Microsoft Edge in the enterprise

In this Microsoft Edge blog post, we discuss Microsoft Edge manageability options for Enterprise customers:

At last year’s Microsoft Ignite conference, we introduced the enterprise story for the web on Windows 10. Microsoft Edge is designed from the ground up to provide a modern, interoperable, and secure browsing experience; in addition, Internet Explorer 11 is also a part of Windows 10 to help bring all your legacy line of business (LOB) applications forward.

Microsoft Edge and Internet Explorer 11 work together to help ease IT management overhead, and also provide a seamless user experience for your users. In this post, we’ll walk through the policies you can use to manage Microsoft Edge in the enterprise for both PCs and mobile devices, including some new policies coming in the Windows 10 Anniversary Update.

Policies currently supported in Microsoft Edge

With Microsoft Edge, we set out to provide a simple, consistent set of scenario-driven management policies to help manage Windows 10 browser deployments on both desktop and mobile. The policies for Microsoft Edge on desktop are available as both Group Policy settings and MDM settings. On mobile they are available as MDM settings.

Here is a summary of all the policies supported by Microsoft Edge grouped by Windows 10 releases:

  • Available in Windows 10 version 1507 or later:
    • Configure Autofill
    • Configure Cookies
    • Configure Do Not Track
    • Configure Password Manager
    • Configure Pop up Blocker
    • Configure search suggestions in the Address bar
    • Configure the Enterprise Mode Site List
    • Configure the SmartScreen Filter
    • Send all intranet sites to Internet Explorer 11
  • Available in Windows 10 version 1511 or later:
    • Allow Developer Tools
    • Allow InPrivate browsing
    • Allow web content on New Tab page
    • Configure Favorites
    • Configure Home pages (see additional note below)
    • Prevent bypassing SmartScreen prompts for files
    • Prevent bypassing SmartScreen prompts for sites
    • Prevent sharing LocalHost IP address for WebRTC

What’s new in Windows 10 Anniversary update

We have added support for the following new Microsoft Edge management policies as a part of the Windows 10 Anniversary Update:

  • Allow access to the about:flags page
  • Allow usage of extensions
  • Show a transitional message when opening Internet Explorer sites

We’ve made a few updates to existing policies based on feedback from customers.  First, all of the Microsoft Edge Group Policy settings on desktop are now available in both the User and Machine policy hives. Second, the home page policy configured on a domain-joined device will no longer allow the user to override the setting.

You can find further details on all Microsoft Edge policies on TechNet, including info about Windows 10 policies that also apply to Microsoft Edge, such as Cortana and Sync settings. Your feedback is important to us, so please let us know what you think or if you have any questions about these changes!

– Dalen Abraham, Principal Program Manager Lead

– Jatinder Mann, Senior Program Manager Lead

– Josh Rennert, Program Manager

A world without passwords: Windows Hello in Microsoft Edge

In this Microsoft Edge blog post, my team and I discuss bringing biometric authentication to the web:

At Build 2016, we announced that Microsoft Edge is the first browser to natively support Windows Hello as a more personal, seamless, and secure way to authenticate on the web. This experience is powered by an early implementation of the Web Authentication (formerly FIDO 2.0) specification, and we are working closely with industry leaders in both the FIDO Alliance and W3C Web Authentication working group to standardize these APIs. Try out this Test Drive demo in Microsoft Edge on recent Windows Insider builds to experience Windows Hello on the web today!


Passwords can be a hassle. Most people don’t create strong passwords or make sure to maintain a different one for every site. People create easy-to-remember passwords and typically use the same passwords across all of their accounts. Surprisingly – and if it’s not surprising to you, you may want to change your password – passwords like “123456” and “password” are very common. Malicious actors can use social engineering, phishing, or key logging techniques to steal passwords from your machine, or they can compromise the server where the passwords are stored. When the same password is used across several sites, compromising one account can expose many others to abuse.

We look forward to a web where the user doesn’t need to remember a password, and the server doesn’t need to store a password in order to authenticate that user. Windows Hello, combined with Web Authentication, enables this vision with biometrics and asymmetric cryptography. In order to authenticate a user, the server sends down a plain text challenge to the browser. Once Microsoft Edge is able to verify the user through Windows Hello, the system will sign the challenge with a private key previously provisioned for this user and send the signature back to the server. If the server can validate the signature using the public key it has for that user and verify the challenge is correct, it can authenticate the user securely.

Screen Capture showing Windows Hello prompt to log in to a web page

These keys are not only stronger credentials – they also can’t be guessed and can’t be re-used across origins. The public key is meaningless on its own and the private key is never shared. Not only is using Windows Hello a delightful user experience, it’s also more secure by preventing password guessing, phishing, and keylogging, and it’s resilient to server database attacks.

Web Authentication: Passwordless and Two Factor Authentication

We’ve been working at the FIDO Alliance with organizations from across the industry to enable strong credentials and help move the web off of passwords. The main goal of the FIDO Alliance is to standardize these interfaces so websites can use Windows Hello and other biometric devices across browsers. The FIDO Alliance recently submitted the FIDO 2.0 proposal to the W3C, and the newly formed Web Authentication working group is standardizing these APIs in the W3C Web Authentication specification.

FIDO Alliance logo

The Web Authentication specification defines two authentication scenarios: passwordless and two factor. In the passwordless case, the user does not need to log into the web page using a username or password; they can log in solely using Windows Hello. In the two factor case, the user logs in normally using a username and password, but Windows Hello is used as a second factor check to make the overall authentication stronger.

In traditional password authentication, a user creates a password and tells the server, which stores a hash of this password. The user, or an attacker who obtains the password, can then use the same password from any machine to authenticate to the server. Web Authentication instead uses asymmetric key authentication. In asymmetric key authentication, the user’s computer creates a strong cryptographic key pair, consisting of a private key and a public key. The public key is provided to the server, while the private key can be held by the computer in dedicated hardware such as a TPM, so that it cannot be moved from that computer. This protects the users against attacks on both the client and the server – client attacks cannot be used to let an attacker authenticate from elsewhere, and server attacks will only give the attacker a list of public keys.

Microsoft Edge supports an early implementation of the Web Authentication spec – in fact, the latest editor’s draft has already been updated beyond our implementation and we expect the spec to continue to change as it goes through the standardization process. We have implemented our APIs with the ms-prefix to indicate that these APIs are very likely to change in the future. We’ll continue to update the APIs in future releases as the standard finalizes – so be sure to watch for changes.

The Web Authentication API is very simple, supporting two methods: window.webauthn.makeCredential and window.webauthn.getAssertion. You will need to make both server-side and client-side changes to enable Web Auth in your web application. Let’s talk through how to use these methods.

Registering the user

To use Web Auth, you, the identity provider, will first need to create a Web Auth credential for your user using the window.webauthn.makeCredential method.

When you use the makeCredential method, Microsoft Edge will first ask Windows Hello to use face or fingerprint identification to verify that the user is the same user as the one logged into the Windows account. Once this step is completed, Microsoft Passport will generate a public/private key pair and store the private key in the Trusted Platform Module (TPM), the dedicated crypto processor hardware used to store credentials. If the user doesn’t have a TPM enabled device, these keys will be stored in software. These credentials are created per origin, per Windows account, and will not be roamed because they are tied to the device. This means that you’ll need to make sure the user registers to use Windows Hello for every device they use. This makes the credentials even stronger – they can only be used by a particular user on a particular origin on a particular device.

Before registering the credential to a user on your server, you will need to confirm the identity of the user. This can be done by sending the user an email confirmation or asking them to use their traditional login method.

The code sample below shows how you would use the makeCredential API. When you call makeCredential, you supply as parameters data structures containing the user account information, crypto parameters, and an attestation challenge. The user account information contains the user’s name, profile image, the site the user is logging into, and the user’s identifier on that site; we’ll cover later how this information is used. The crypto parameters structure contains the crypto algorithm you wish to use. The attestation challenge is used by the authenticator to produce an attestation statement, which tells the server what security measures the authenticator implements for its credentials. There are a number of other optional parameters, which we’ll ignore here. The methods are all implemented as promises. When the promise resolves, it returns an object that contains the credential ID, public key, crypto algorithm, and attestation. The credential ID identifies the public/private key pair. You will then send this information back to the server for validating future authentications.

function makeCredential() {
  try {
    var accountInfo = {
      rpDisplayName: 'Contoso',       // Name of relying party
      displayName: 'John Doe',        // Name of user account in relying party
      name: 'johndoe@contoso.com',    // Detailed name of account
      id: 'joed',                     // Account identifier
      imageUri: imgUserProfile        // User’s account image
    };
    var cryptoParameters = [{
      type: 'ScopedCred',
      algorithm: 'RSASSA-PKCS1-v1_5'
    }];
    var timeout = { };
    var denyList = { };
    var ext = { };
    var attestationChallenge = getChallengeFromServer();
    window.webauthn.makeCredential(accountInfo, cryptoParameters, attestationChallenge, timeout, denyList, ext)
      .then(function (creds) {
        // If the promise succeeds, send credential information to the server
      })
      .catch(function (reason) {
        // User may have cancelled the Windows Hello dialog
      });
  } catch (ex) {
    // The user may not have set up Windows Hello; show instructions
  }
}
The Microsoft Edge implementation is ms-prefixed, so you’ll need to call window.msCredentials.makeCredential instead of window.webauthn.makeCredential. The Microsoft Edge implementation is also based on an earlier draft of the specification, so there are a number of other differences as well: the credential type is “FIDO_2_0” instead of “ScopedCred”, and we don’t yet implement the optional timeout, denyList, or ext parameters, nor require the attestation challenge to make the credential. In Microsoft Edge, you would instead make this call using the following code:

    var accountInfo = {
      rpDisplayName: 'Contoso',        // Name of relying party
      userDisplayName: 'John Doe'      // Name of user account in relying party
    };
    var cryptoParameters = [{
      type: 'FIDO_2_0',
      algorithm: 'RSASSA-PKCS1-v1_5'
    }];
    window.msCredentials.makeCredential(accountInfo, cryptoParameters)
      .then(function (cred) {
        // If the promise succeeds, send credential information to the server
        // (sendToServer stands in for your own upload logic)
        sendToServer({
          credential: { type: 'ScopedCred', id: cred.id },
          algorithm: cred.algorithm,
          publicKey: JSON.parse(cred.publicKey),
          attestation: cred.attestation
        });
      });
Authenticating the user

Once the credential is created on the client, the next time the user attempts to log into the site, you can offer to sign them in using Windows Hello instead of a password. You will authenticate the user using the window.webauthn.getAssertion call.

The getAssertion call has a number of optional parameters, but the only required parameter is the challenge. This is the challenge that the server will send down to the client. This challenge is a random quantity generated by the server. Since the challenge is not predictable by an attacker, the server can be assured that any assertions it receives were freshly generated in response to this challenge and are not replays of earlier assertions. The allowList parameter also takes an optional list of credential ID information to locate the correct private key. This information is useful if you’re doing two factor auth and you can share the id from the server, where it is stored. In the passwordless case, you don’t want to share the id from the server because the user hasn’t yet authenticated.

If the user has multiple credentials for an origin in the TPM, the browser will show a user experience allowing the user to pick the account they meant to use (assuming they have multiple accounts with an application, and you don’t provide a credential ID). This is why we collect the user profile picture and account name upon registration.

Once the getAssertion call is made, Microsoft Edge will show the Windows Hello prompt, which will verify the identity of the user using biometrics. After the user is verified, the challenge will be signed within the TPM and the promise will return with an assertion object that contains the signature and other metadata. You will then send this data to the server. We’ll check the server code next to see how you verify the challenge is the same that you had sent.

function getAssertion() {
  try {
    var challenge = getChallengeFromServer();
    var allowList = [{
      type: 'ScopedCred',
      id: getCredentialID()
    }];
    var timeout = { };
    var ext = { };
    window.webauthn.getAssertion(challenge, timeout, allowList, ext)
      .then(function (assertion) {
        // Send the signed challenge and metadata to the server
      }, function (e) {
        // No credential in the store; fall back to password
      });
  } catch (ex) {
    // Log failure
  }
}
In Microsoft Edge, you will need to call window.msCredentials.getAssertion instead of window.webauthn.getAssertion. The Microsoft Edge implementation also requires the credential ID, and we don’t yet support the account picker experience. A side effect of this is that for the passwordless case, you’ll need to store your credential ID information in local storage on the client, either in IndexedDB or localStorage, when making your credential. This means that if a user deletes their browsing history, including local storage, they will need to re-register to use Windows Hello the next time they log in. We will very likely fix this issue in a future release.

Here’s how you would make the getAssertion call in Microsoft Edge. Note that the accept object is required within the filters parameter.

var filters = {
  accept: [{
    type: 'FIDO_2_0',
    id: getCredentialIDFromLocalStorage()
  }]
};
window.msCredentials.getAssertion(challenge, filters)
  .then(function (attestation) {
    // Send the signed challenge and metadata to the server
    // (sendToServer stands in for your own upload logic)
    sendToServer({
      credential: { type: 'ScopedCred', id: attestation.id },
      clientData: attestation.signature.clientData,
      authnrData: attestation.signature.authnrData,
      signature: attestation.signature.signature
    });
  });

Server side authentication

Once you receive the assertion on the server, you will need to validate the signature. The Node.js code below shows how you would validate the signature to authenticate the user on the server. We also have the same code available in C# and PHP.

var jwkToPem = require('jwk-to-pem');
var crypto = require('crypto');

var webAuthAuthenticator = {
  validateSignature: function (publicKey, clientData, authnrData, signature, challenge) {
    // Make sure the challenge in the client data
    // matches the expected challenge
    var c = new Buffer(clientData, 'base64');
    var cc = JSON.parse(c.toString().replace(/\0/g, ''));
    if (cc.challenge != challenge) return false;

    // Hash the client data with SHA-256
    var hash = crypto.createHash('sha256');
    hash.update(c);
    var h = hash.digest();

    // Verify the signature is correct for authnrData + hash
    var verify = crypto.createVerify('RSA-SHA256');
    verify.update(new Buffer(authnrData, 'base64'));
    verify.update(h);
    return verify.verify(jwkToPem(JSON.parse(publicKey)), signature, 'base64');
  }
};

Evolving Web Authentication standard and Microsoft Edge implementation

As mentioned above, Microsoft Edge has an early implementation of Web Authentication, and there are a number of differences between our implementation and the April 2016 spec.

  • Microsoft Edge APIs are ms-prefixed
  • Microsoft Edge does not yet support external credentials like USB keys or Bluetooth devices. The current API is limited to embedded credentials stored in the TPM.
  • The currently logged-in Windows user account must be configured with at least a PIN, and preferably face or fingerprint biometrics. This is to ensure that we can authenticate the access to the TPM.
  • We do not support all of the options in the current Web Auth spec draft, like extensions or timeouts.
  • As mentioned earlier, our implementation requires that the list of acceptable credential IDs be included in every getAssertion call.
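Because the API surface differs between the prefixed implementation and the spec, a small feature-detection shim can keep your call sites uniform. This is only a sketch; getCredentialsApi and mockEdgeWindow are hypothetical names, and the shim assumes just the two object names discussed in this post.

```javascript
// Sketch: pick whichever credential API object the browser exposes.
// 'webauthn' is the April 2016 spec name; 'msCredentials' is Edge's prefix.
function getCredentialsApi(win) {
    if (win.webauthn) return win.webauthn;           // standard spec draft
    if (win.msCredentials) return win.msCredentials; // Microsoft Edge
    return null;                                     // no Web Authentication support
}

// Example with a mock window object standing in for the real one:
var mockEdgeWindow = { msCredentials: { getAssertion: function () {} } };
console.log(getCredentialsApi(mockEdgeWindow) === mockEdgeWindow.msCredentials); // true
```

Remember that even with a shim, the Edge code path still needs the credential ID filter described above, so the two branches cannot share identical arguments.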

The specification is also going through the W3C standardization process, and we expect a number of changes in the specification, like the object name recently changing from window.fido to window.webauthn in the latest editor’s draft.

We have a number of resources available to help you prototype and experiment with these APIs:

  • Webauthn.js polyfill. Using this polyfill, you can code to the standard instead of our early implementation. We’ll update this polyfill for every major published version of the specification.
  • Windows Hello in Microsoft Edge test drive sample. This test drive sample shows you the typical client side registration and assertion flow.
  • Server and client side WebAuth sample. This sample code shows the end-to-end client and server side flow for registration and assertion.
  • C#, PHP, and JS server side samples. These code samples show how you could implement your server side logic in a number of languages.
  • Web Authentication MSDN documentation and dev guide.
  • Edge Summit talk on Windows Hello in Microsoft Edge.

Call to Action

We’re excited to support Windows Hello and Web Authentication in Microsoft Edge and to innovate in the biometric authentication and passwordless space on the web. Now is a great time to start prototyping and experimenting with these APIs and sharing your feedback with us in the comments below or on Twitter. We look forward to hearing from you!

Rob Trace, Program Manager, Microsoft Edge
Jatinder Mann, Program Manager Lead, Microsoft Edge
Vijay Bharadwaj, Software Engineering Lead, Security
Anoosh Saboori, Program Manager, Security