Web Performance: When millisecond resolution just isn’t enough

I discuss when milliseconds just aren’t enough in this IE Blog article:

Sometimes measuring time in millisecond resolution just isn’t accurate enough. Together with industry and community leaders, the W3C Web Performance working group has worked to solve this problem by standardizing the High Resolution Time specification. As of this week, this specification has been published as a Proposed Recommendation (PR) and is widely adopted in modern browsers. Take a look at the What Time is it? test drive demo to see how this API works.

This specification has gone from just an idea to PR in eight short months. The PR stage of standardization is the final step before a Web standard becomes an official W3C Recommendation. Additionally, this interface has been broadly adopted in browsers, including full support in Internet Explorer 10 and Firefox 15, and supported with a prefix in Chrome 22. This is a great example of what’s possible when the industry and community come together through the W3C.

So why aren’t milliseconds good enough? Time has long been measured in the Web platform using some form of the JavaScript Date object, whether it is through the Date.now() method or the DOMTimeStamp type. The Date object represents a time value as time in milliseconds since 01 January, 1970 UTC. For most practical purposes, this definition of time has been sufficient to represent any instant to within 285,616 years from 01 January, 1970 UTC.

For example, at the time of writing this blog, my Date.now() time value from my IE10 Developer Tools Console was 1350509874902. This thirteen digit number represents the number of milliseconds from the origin of this time base, 01 January, 1970. That time corresponds to 17 Oct 2012 21:37:54 UTC.

Though this definition will continue to be genuinely useful for determining the current calendar time, there are some cases where this definition is not sufficient. For example, it is useful for developers to determine if their animation is running smoothly at 60 frames per second (one frame painted every 16.667 milliseconds). Using the simple method of calculating the instantaneous FPS by measuring when the frame drawing callback was last made, one can only determine FPS to 58.8 FPS (1/17) or 62.5 FPS (1/16).

Similarly, sub-millisecond resolution is also desirable when accurately measuring elapsed time (e.g., using the Navigation Timing, Resource Timing and User Timing APIs to instrument your network and script timing) or when attempting to synchronize animation scenes or audio with animation.

To solve this issue, the High Resolution Time specification defines a new time base with at least microsecond resolution (one thousandth of a millisecond). To reduce the number of bits used to represent this number and to increase readability, instead of measuring time from 01 January, 1970 UTC, this new time base measures time from the beginning of navigation of the document, performance.timing.navigationStart.

The specification defines performance.now() as the analogous method to Date.now() for determining the current time in high resolution. The DOMHighResTimeStamp is the analogous type to DOMTimeStamp that defines the high resolution time value.

For example, looking at the current time using performance.now() and Date.now() in the IE10 Developer Tools Console at the time of writing this blog, I see the following two values:

performance.now():           196.304879519774
Date.now():        1350509874902

Even though both of these time values represent the same instant in time, they are being measured from a different origin. The performance.now() time value definitely feels more readable.
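To make the difference concrete, here is a minimal sketch (not from the original article) of measuring instantaneous frame rate with performance.now(). It assumes a browser with unprefixed performance.now() and requestAnimationFrame; the variable names are just for illustration.

// Measure the time between frame callbacks with sub-millisecond precision.
var lastFrameTime = 0;

function onFrame() {
   var now = performance.now();            // e.g. 196.304879519774
   if (lastFrameTime > 0) {
      var delta = now - lastFrameTime;     // elapsed ms, including the fractional part
      var fps = 1000 / delta;              // e.g. 59.94, not just 58.8 or 62.5
      console.log("Instantaneous FPS: " + fps.toFixed(2));
   }
   lastFrameTime = now;
   requestAnimationFrame(onFrame);
}

requestAnimationFrame(onFrame);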

As High Resolution Time is measured from the start of a document’s navigation, performance.now() in a sub-document will be measured from the start of navigation of the sub-document, not the root document. For example, suppose a document has same-origin iframe A and cross-origin iframe B, where a navigation occurred in iframe A about 5 milliseconds after the start of navigation of the root document and in iframe B about 10 milliseconds after the start of navigation of the root document. If we measure the exact moment of time 15 milliseconds after the start of navigation of the root document, we would get the following values for performance.now() calls in the different contexts:

performance.now() in iframe B:         5.123 ms
performance.now() in iframe A:        10.123 ms
performance.now() in root document:   15.123 ms
Date.now() in any context:            134639846051 ms

Figure: Date.now() is time measured since 01 January 1970, whereas performance.now() is time measured since the start of the document navigation

This design not only prevents leaking the parent document’s creation time to cross-origin iframes, it also allows you to measure time relative to your own start. For example, if you were using the Resource Timing interface (which uses High Resolution Time) to determine how long it takes for a server to respond to a resource request in a sub-document, you wouldn’t need to make adjustments to take the time of adding the sub-document to the root document into account.

If you wish to do cross-frame time comparisons, you would just need to call top.performance.now() to get a time value relative to the root document’s start of navigation, which would return the same value in all same-origin iframes.
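As a small sketch (assuming a same-origin parent that also supports performance.now()), a frame can relate its local timestamps to the root document’s time base like this:

// Inside a same-origin iframe: compare local time against the root's time base.
var localTime = performance.now();        // relative to this iframe's navigationStart
var rootTime = top.performance.now();     // relative to the root document's navigationStart

// The difference approximates how long after the root this iframe started navigating.
var offsetFromRoot = rootTime - localTime;
console.log("This frame began navigating about " + offsetFromRoot.toFixed(3) +
            " ms after the root document.");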

Another significant benefit of this API over Date.now() is that performance.now() is monotonically increasing and not subject to clock skew or adjustment. The difference between subsequent calls to performance.now() will never be negative. Date.now() doesn’t have such a guarantee, and in practice we have heard of reported cases where negative times have been seen in analytics data.

High Resolution Time is another great example of how quickly new ideas can become interoperable standards that developers can depend on in modern HTML5-enabled browsers. Thanks to everyone in the W3C Web Performance Working Group for helping design this API and to other browser vendors for rapidly implementing it with an eye towards interoperability.

Thanks,
Jatinder Mann
Internet Explorer Program Manager


W3C Web Performance Workshop

In this IE Blog article, I discuss the W3C Web Performance Workshop:

The W3C Web Performance Working Group is looking for new use cases and performance issues to solve in its next chartered period. To that end, the Working Group is holding a public workshop on November 8, 2012, in Mountain View, CA, where performance experts and Web developers are invited to present ideas and discuss current challenges. Statements of interest will be the basis of the discussion at the Workshop and must be submitted by October 29th to this mailing list.

Fast HTML5 Web applications benefit consumers who browse the Web and developers building innovative new experiences. Just over two years ago, the W3C announced the formation of the Web Performance Working Group chartered with two goals: making it easier to measure and understand the performance characteristics of Web applications and defining interoperable methods to write more CPU- and power-efficient applications.

Over the course of these two years, together with Google, Mozilla, Facebook, and other industry and community leaders who participate in the W3C Web Performance Working Group, the working group has designed and standardized eight interfaces that are now widely adopted in modern HTML5-enabled browsers: Navigation Timing, Resource Timing, User Timing, Performance Timeline, Page Visibility, Timing control for script-based animations, High Resolution Time and Efficient Script Yielding. These APIs are supported in Internet Explorer, Firefox, and Chrome, and are great examples of how quickly new ideas can become interoperable standards that developers can depend on.

If you are interested in sharing feedback to the Working Group and cannot attend the workshop, you can do so by completing this survey. The deadline for the survey is November 2nd.

Thanks, — Jatinder Mann, Program Manager, Internet Explorer

IEBlog: Web Performance APIs Rapidly Become W3C Candidate Recommendations

In this IE Blog article, I discuss how the Web Performance APIs are rapidly moving through standardization.

We’re pleased to share that three new W3C Web Performance Working Group specifications moved to W3C Candidate Recommendations. Accurately measuring the performance characteristics of Web applications is critical to making the Web faster. In addition, developers need the ability to effectively use the underlying hardware to improve the performance of their applications. Over the last two years, companies including Microsoft, Google, Mozilla, Intel, and Facebook have been working toward these goals through the working group. This is a great example of what’s possible when the industry and community come together through the W3C.

The Navigation Timing, Resource Timing, User Timing and Performance Timeline specifications help developers accurately measure Web application performance. The first three specifications provide developers with information related to the navigation of the document, resources on the page, and developer scripts, respectively. The Performance Timeline specification defines a unifying interface to retrieve this timing data. Prior to these APIs, it was not possible for developers to accurately measure their site performance.
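As a rough sketch of how two of these interfaces fit together (method names are taken from the User Timing and Performance Timeline specifications; doWork() is a hypothetical application function):

// User Timing: instrument your own script with named marks and a measure.
performance.mark("startWork");
doWork();                                  // hypothetical application function
performance.mark("endWork");
performance.measure("work", "startWork", "endWork");

// Performance Timeline: one unified way to query the recorded entries.
var entries = performance.getEntriesByType("measure");
console.log("work took " + entries[0].duration + " ms");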

To ensure these performance metrics are measured in the most accurate way possible, the High Resolution Time specification defines a sub-millisecond clock resolution. This interface not only benefits accurate measurements of performance metrics, but also allows better frame rate calculations and synchronization of animations or audio cues. For the first time developers can measure operations with sub-millisecond accuracy.

The Page Visibility, Timing control for script-based animations, and Efficient Script Yielding specifications help developers write more CPU- and power-efficient Web applications. The Page Visibility API allows for programmatically determining the current visibility state of the page. Developers can use this data to make better CPU- and power-efficiency decisions, e.g., throttling down activity when the page is in the background tab. The requestAnimationFrame API, from the Timing control for script-based animations specification, allows for creating more efficient JavaScript animations. Finally, the setImmediate API, from the Efficient Script Yielding specification, allows developers to efficiently yield control flow to the user agent and receive an immediate callback, efficiently leveraging the CPU.
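As an illustrative sketch (not from the original article), a script that processes a large batch of items could use setImmediate to yield between chunks; IE10 exposes the API with the ms prefix, and processItem() is a hypothetical function:

// Yield control back to the browser between chunks of work.
function processChunk(items) {
   if (items.length === 0) return;
   processItem(items.shift());             // hypothetical per-item work
   var next = function () { processChunk(items); };
   if (window.setImmediate) window.setImmediate(next);
   else if (window.msSetImmediate) window.msSetImmediate(next);
   else setTimeout(next, 0);               // fallback, subject to setTimeout's minimum delay
}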

In order to ensure Web developers only have to write code once and have it work interoperably in all browsers, the Working Group has worked diligently these last two years to standardize these APIs. The table below shows the maturity level of all the specifications currently being edited in the Working Group.

Table showing the status of W3C Web Performance specifications as of July 27, 2012

As of this month, the Navigation Timing specification has been published as a Proposed Recommendation (PR). This stage of standardization is the final step before a Web standard becomes an official W3C Recommendation. Additionally, this interface has been broadly adopted in browsers, including support since Internet Explorer 9, Chrome 6 and Firefox 7. The working group recently started incorporating feedback and working on Navigation Timing 2, the next version of the specification.

As of this month, the User Timing, Performance Timeline and Page Visibility specifications have been published as Candidate Recommendations (CR). This stage of standardization is prior to the PR stage and reflects that the W3C believes these specifications have been widely reviewed and satisfy the Working Group’s technical requirements. Resource Timing was published as a CR just two months ago, along with High Resolution Time, which went from an Editor’s Draft to CR in just three months.

These APIs are a great example of how quickly new ideas can become interoperable standards that developers can depend on in modern HTML5-enabled browsers. Thanks to everyone in the W3C Web Performance Working Group for helping design these APIs and to other browser vendors for starting to implement these APIs with an eye towards interoperability.

—Jatinder Mann, Program Manager, IE Performance

Improving Web Performance: It’s All In The (Navigation) Timing

I gave the following interview to Benchmark: The Mobile & Internet Performance Review.

An interview with Microsoft IE Program Manager Jatinder Mann

Users’ thirst for speed seems increasingly unquenchable. Even as they (barely) tolerate the sluggish performance of mobile devices, they demand more and more of their PCs. Make them wait one blink of an eye too long, and they are gone, taking the revenue they would generate with them. In 2010, The World Wide Web Consortium chartered a Web Performance Working Group to give developers client-side tools, in the browsers, to gain greater visibility into the timing of each aspect of page loading and help them see how they can make their pages faster. The first product of the working group is the Navigation Timing API, which Keynote is already leveraging to provide more granular site performance reporting and to provide operations managers and developers a common language to address site improvements. Microsoft IE Program Manager Jatinder Mann lives and breathes performance, both on the Internet Explorer team and on the W3C Web Performance Working Group, and is an expert on the Navigation Timing API. Benchmark recently caught up with him to get an overview of the Navigation Timing API and other initiatives and what they offer the Web community.

Benchmark: You work on the Internet Explorer team and focus on performance, and you’ve also been involved with the Navigation Timing API. Tell us what’s going on in your performance world.

Jatinder Mann: The Web Performance Working Group was chartered about a year and a half ago with two main goals – making it easier to measure and understand the performance characteristics of Web applications, and also to define interoperable methods to write more CPU-efficient applications.

So together with Google, Mozilla, Facebook, and other industry leaders that participate in this working group, we designed a few APIs – Navigation Timing, Resource Timing, User Timing, and Performance Timeline. And these APIs help developers accurately measure Web performance.

The first three specifications – Navigation Timing, Resource Timing, and User Timing – define interfaces for Web apps to access timing information related to the navigation of the document, resources on the page, and developer scripts, respectively. Performance Timeline defines the unifying interface to retrieve the data.

Benchmark: The Navigation Timing API is actually available now. What is it, and why is it important for website owners to know about it?

Jatinder Mann: The short answer is, Navigation Timing allows site owners to accurately measure how fast their site is loading for their customers.

And this is not like an instrumented build where you are looking at the server side. With Navigation Timing what you’re getting is data from actual client machines.

So let’s say you are trying to improve page performance for a site which varies from location to location. You can look at machines in various locales and figure out what their actual load times are and understand the different causes of the performance problems.

Before this API there really wasn’t a way to accurately measure that without instrumenting builds and whatnot. So this is natively done within the browser and that makes it much more powerful.

Benchmark: You mentioned some other specs, Resource Timing, et cetera. Can you give us a quick overview of those?

Jatinder Mann: Navigation Timing was getting the time for the document navigation, but if you look at the timeline of activities that happen for a page to load, that’s just the first portion. There’s also all the resources within a page. Resource Timing tackles that problem.

None of the browser vendors have actually implemented anything yet, mainly because the spec isn’t yet at the stage of standardization. But both IE and Chrome are very interested in implementing it as soon as the spec is in a good state which I believe will happen soon.

Then there’s User Timing which focuses on developer scripts within a page. Again, if we look at the timeline of how a web page is loaded and the point at which the user can start interacting with the page, there’s a lot of script execution happening. User Timing is designed for developers to measure the time it takes to execute the scripts on the page.

And all of this information is included in what we call the Performance Timeline. It’s a unifying interface where you can obtain timing data for any of these different measurements – you can get your Navigation Timing, your Resource Timing and your Script Timing through one interface.

Benchmark: So what does this mean in practice? How will it change what developers do?

Jatinder Mann: We’re giving performance information on things that developers can change. You can look at the time spent in these various phases, for example, if your page is very slow to load, you can look at Navigation Timing to understand the culprit. If you see that the redirects are taking too long, you may choose to reduce the number of redirects you’re doing for a page. If the DNS look-up phase is very slow, you may want to use DNS proxies or caching. And if connection times are slow, you can determine which locales are slow and build CDNs closer to those customers.

So a lot of the timing information here is actionable information – you get timing information about why your page is slow and there’re actions you can take to improve that.
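(As a rough sketch of the breakdown described above, using attribute names from the Navigation Timing specification:)

var t = window.performance.timing;
var redirectTime = t.redirectEnd - t.redirectStart;      // time spent in redirects
var dnsTime = t.domainLookupEnd - t.domainLookupStart;   // DNS lookup
var connectTime = t.connectEnd - t.connectStart;         // TCP connection
var responseTime = t.responseEnd - t.requestStart;       // request and response

console.log("redirect: " + redirectTime + " ms, DNS: " + dnsTime + " ms, connect: " +
            connectTime + " ms, response: " + responseTime + " ms");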

Benchmark: A web page goes through a long series of steps and there are performance implications in every one of them.

Jatinder Mann: Yes. The advantage here is that now you can see the breakdown of where the time is being spent. As I mentioned earlier, if your redirects are taking too long, as an owner of a website I may choose to reduce the number of redirects, or be careful of what server I’m redirecting to. Maybe that server’s slow.

Benchmark: So the goal of all this is to make the experience faster for the users. That’s obviously a focus of the IE team and of the working group. What is Microsoft’s perspective on speed? Have you identified any critical thresholds that guide what you do?

Jatinder Mann: For many releases now we’ve made significant product changes to IE to make sure it’s fast and fluid for end users. For example, hardware-accelerated HTML5 implementation, and reworking our layout and JavaScript engines from scratch. I don’t know if it’s easy to characterize a threshold, because we see that the Web consists of many different patterns.

Within our IE performance lab we test many different pages, representative of patterns on the Web, and try to improve those patterns.

For example you can have five different news sites and they may all be using different patterns. So it’s not as easy as saying ‘improve JavaScript and all Web pages will be faster’ because some sites may be JavaScript heavy, some may be layout heavy.

Benchmark: What is your sense of how the developer community and the site community are embracing this API? Or is it still too early to tell?

Jatinder Mann: Keynote, for example, has already started performance monitoring using this API. Facebook had a lot of requests for this API while we were building it. So I’m certain at least for Resource Timing, once that’s available, they may be interested as well. And there are many Microsoft teams that are looking at this API.

So there’s definitely some interest already through people that are in the business of performance improvement and I would love to see more site owners start using these tools to improve their site performance.

Benchmark: What’s next for the working group?

Jatinder Mann: My expectation is that Resource Timing, User Timing and Performance Timeline will probably be the next ones that the browser vendors will want to implement, mainly because it would complete the story with Navigation Timing and you would get the complete Performance Timeline view.

That’s something I think would be very interesting, coming up very soon.

Benchmark: Very soon as in 2012?

Jatinder Mann: It’s hard to say. But I would say, yes, very soon. Chrome, and IE, and Mozilla are all active in this working group and I think we all appreciate that Navigation Timing is out there and it’d be nice to get the next set of things out there too.

Benchmark: So to close, what is the most important thing to communicate about the new Navigation Timing API and its use?

Jatinder Mann: The Navigation Timing API is a new way to get information about what’s happening in the browser that wasn’t available before or was difficult to get. This API makes it easier for developers to identify ways for improving Web performance.

Internet Explorer Performance Lab: reliably measuring browser performance

Matt, Jason and I wrote this article on the Building Windows 8 engineering blog on how the Internet Explorer team measures web browsing performance. PC Magazine discusses the article as well. Enjoy!

A big part of this blog is going behind the scenes to show you all the work that goes into the engineering of Windows 8. In this post we take a look at something we all care very deeply about, as engineers and as end-users: real world web performance. We do a huge amount of work to get beyond the basics of anecdotes and feel as we work to build high performance web browsing. This post is authored by Matt Kotsenas, Jatinder Mann, and Jason Weber on the IE team, though performance is something that every single member of the team works on.
–Steven Sinofsky, President of Windows and Windows Live.

Web performance matters to everyone, and one engineering objective for Internet Explorer is to be the world’s fastest browser. To achieve this goal we need to reliably measure browser performance against the real world scenarios that matter to our customers.

Over the last five years we designed and built the Internet Explorer Performance Lab, one of the world’s most sophisticated web performance measurement systems. The IE Performance Lab collects reliable, accurate, and actionable data to inform decisions throughout the development cycle. We measure the performance of Internet Explorer 200 times daily, collecting over 5.7 million measurements and 480GB of runtime data each day. We understand the impact of every change to the product and ensure that Internet Explorer only gets faster. This blog post takes a deep look at how the IE Performance Lab is designed and how we use the lab to ensure we’re continually making the web faster.

In this post, we present:

  • Overview of the IE Performance Lab
  • Lab infrastructure
  • What (and how) we measure
  • Testing a scenario
  • Results investigation
  • Testing third-party software
  • Building a fast browser for users

Overview of the IE Performance Lab

In order to reliably measure web performance over time, a system needs to be able to reproducibly simulate real world user scenarios. In essence, our system needs to create a “mini version of the Internet.”

The IE Performance Lab is a private network completely sealed from both the public Internet and the Microsoft intranet network, and contains over 140 machines. The lab contains the key pieces of the real Internet, including web servers, DNS servers, routers, and network emulators, which simulate different customer connectivity scenarios.

Although this may appear complex at first glance, this approach allows all sources of variance to be removed. By controlling every aspect of the network, down to individual packet hops and latencies, our tests become deterministic and repeatable, which is critical to making the results actionable. In the IE Performance Lab, activity is measured with 100 nanosecond resolution.

Figure: Diagram of the lab pipeline, with content servers connected to network emulators, connected to DNS servers, connected to test clients, connected to raw data storage, connected to data analysis, connected to a SQL database.

This type of network configuration also provides a great amount of flexibility. Because we’re simulating a real world setup, our lab can accommodate nearly any type of test machine or website content. The IE Performance Lab supports desktops, laptops, netbooks, and tablets with x86, x64, and ARM processors, all simultaneously.

Similarly, because the lab uses the Windows Performance Tools (WPT), we can run the same tests using different web browsers, toolbars, anti-virus products, or other third-party software and directly compare the results. WPT provides deep insight into the underlying hardware. Using WPT, we can capture everything from high-level CPU and GPU activity, to low-level information such as cache efficiency, networking statistics, memory usage patterns, and more. WPT allows us to measure and optimize performance across the stack to ensure that the hardware, device drivers, Windows operating system, and Internet Explorer are all efficiently optimized together.

A single test run takes 6 hours to complete and generates over 22GB of data during that time. This highly automated system is staffed by a small team that monitors operations, analyzes results, and develops new infrastructure features.

Lab infrastructure

The Performance Lab infrastructure can be broken into three main categories: Network and Server, Test Clients, and Analysis and Reporting. Each category is designed to minimize interaction across components, both to improve scalability of testing and to reduce the possibility of introducing noise into the lab environment.

Figure: A view of the IE Performance Lab, including a number of test and analysis machines on our private network.

Network and server infrastructure

Let’s start by discussing the DNS servers, network emulators, and content servers; all the components that together create the mini Internet. Over the next three sections we’ll work our way from right to left in the architectural diagram.

Content servers

Content servers are web servers that stand in for the millions of web hosts on the Internet. Each content server hosts real world web pages that have been captured locally. The captured pages go through a process we refer to as sanitization, where we tweak portions of the web content to ensure reproducible determinism. For example, JavaScript Date functions or Math.random() calls are replaced with static values. Additionally, the dynamic URLs created by ad frameworks are locked to the URL that was first used by the framework.
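As a purely illustrative sketch (not the lab’s actual tooling), sanitization amounts to source rewrites along these lines:

// Before sanitization: values change on every page load.
var cacheBuster = Math.random();
var timestamp = new Date().getTime();

// After sanitization: the same values are pinned so every run is identical.
var cacheBuster = 0.1234567;
var timestamp = 1350509874902;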

After sanitization, content is served similarly to static content through an ISAPI filter that maps a hash of the URL to the content, allowing instantaneous lookup. Each web server is a 16-core machine with 16GB of RAM to minimize variability and ensure that content is in memory (no disk access required).

Content servers can also host dynamic web apps like Outlook Web Access or Office Web Apps. In these cases, the application server and any multi-tier dependencies are hosted on dedicated servers in the lab, just like real world environments.

Network emulators

Since many sources of variability have been removed, network speeds no longer reflect the experiences of many users with slower connections. To simulate real world customer environments, a test can take advantage of network emulation to understand the performance across the wide range of networks in use today. The lab supports emulating several DSL configurations, cable modems, 56k modems, as well as high-bandwidth, high-latency environments like WAN and 4G environments. As HTTP requests are passed to the emulator, it simulates network characteristics like packet delay and reordering, then forwards the request on to the web hosts. Upon receiving a response, emulation is again applied and then passed back to the test client.

Using dedicated hardware for network emulation provides the most realistic testing environment possible, and significantly reduces the observer effect. Although dedicated hardware adds cost and complexity compared to proxy or software-based solutions, it’s the only way to accurately measure performance. Browsers limit the number of simultaneous proxy connections to prevent proxy saturation, so using a proxy for network emulation has the unintended effect of sidestepping domain sharding and other optimizations made by the webpage. Additionally, local network emulation will compete with the browser for local machine resources, especially on low power machines.

DNS servers

Like real world DNS servers, the lab’s DNS servers link the content servers to the test clients. The lab also uses a different DNS server for each network emulator, meaning that changing from one network speed to another is as simple as changing the DNS server. In these cases, instead of resolving domain names to the web hosts, the DNS server resolves all domain names to the associated network emulator.

Test client configurations

We want to ensure that Internet Explorer consistently runs fast across all classes of computer hardware. The lab contains over 120 computers used to measure Internet Explorer performance. We refer to these as test clients; they range from high-end x64 desktops, to low-powered netbooks, to touch-first tablet devices, and everything in between. Because repeatability of measurements is paramount, all test clients are physical machines.

Figure: The Internet Explorer Performance Lab change comparison machine pool.

Different machine classes contain both discrete and integrated graphics platforms to ensure that Internet Explorer continues to take full advantage of hardware acceleration across the ecosystem of devices. Above is our main machine pool. These PCs approximate the average consumer experience over the lifetime of a Windows 7 or Windows 8 PC.

Machines are ordered from the OEM to be identical; they all come from the same manufacturing lot and their performance characteristics are verified prior to use. Since the lab runs 24/7, hardware failures are inevitable. Replacing failed components with identical parts from a different manufacturing lot almost always results in the repaired computer running faster than the other machines in the pool. While this difference would be unnoticeable in the real world, when you’re measuring down to 100 nanoseconds, even a few cycles can impact the results! If after a repair a machine no longer runs identically to the rest of the pool, it is removed from the lab and the pool’s size permanently shrinks. In response, the lab’s purchases include extra “buffer” machines, so that when a failed machine is removed from the pool, the excess capacity provides a cushion, and the lab’s operations are not affected.

To add hardware breadth, we have additional machine pools that run the spectrum of consumer scenarios. Good performance on these machines ensures that IE uses the underlying hardware effectively across the PC ecosystem.


Figure: Low-powered test machines, each in a different state of testing.

If even more diversity is needed, the IE Performance Lab can also make use of the Windows Graphics Lab. The Windows Graphics Lab stocks nearly every graphics chipset manufactured. PCs can be configured into nearly any permutation imaginable and then used for performance testing. The Windows Graphics Lab is invaluable for diagnosing graphics problems across chipsets and driver revisions.

Analysis and reporting servers

Collection and analysis of test results are divided into two separate steps. By offloading analysis to dedicated machines, the test clients can begin another performance run earlier, and more powerful server class machines can be used to perform the analysis more rapidly. The sooner the analysis completes, the more efficiently we can identify performance changes.

For analysis, we use 11 server class machines, each of which has 16 cores and 16GB of RAM. During analysis, each trace file is inspected and thousands of metrics are extracted and inserted into a SQL server. Over the course of 24 hours these analysis machines will inspect over 15,000 traces that will be used for trend analysis.

Figure: Two of several server racks, which contain file servers, a SQL server, and several analysis and content servers.

The SQL Server used to store the nearly 6 million measurements we collect each day is a 24-logical-core machine with 64GB of RAM. Reports can be generated directly from SQL, or results can be inspected using either an HTML-based comparison application or a WCF service that provides results in XML or JSON formats.

What (and how) we measure

With the infrastructure in place, let’s review the different types of scenarios measured in the Performance Lab, and the tools we use to gather metrics.

Scenarios measured daily

The Performance Lab focuses on real world scenarios that matter to users. As a result, we run over 20,000 different tests daily. These tests fall into four, sometimes overlapping, categories:

Figure: Four overlapping categories of tests: Loading Content, Interactive Web Apps, IE "The Application", and Synthetic Platform Benchmarks.

Loading content – Navigating from one page to another is still the most common activity inside a web browser. Loading web content is also the only category that touches most of the browser’s eleven subsystems. Loading web content is a prerequisite for the other categories of scenarios.

Interactive web apps – This category covers what is sometimes referred to as content creation, AJAX applications, or Web 2.0 sites. It includes interacting with popular news and social networking sites as well as interacting with mail and document applications like Outlook Web Access and Office Web Apps.

IE “the application” – Important but often forgotten are scenarios that interact with the browser itself. Common interactions include opening or closing the browser, switching tabs, using browser features like History and Favorites, and panning and zooming with keyboard, mouse, and touch inputs.

Synthetic benchmarks – Rarely forgotten but often overstated are synthetic benchmarks like WebKit SunSpider. Benchmarks can be a useful engineering tool as they are designed to stress individual browser subsystems and accentuate differences between browsers. However, in order to maximize those differences, benchmarks often resort to atypical usage patterns or edge cases.

Real world patterns

When measuring performance it is important to ensure that the tests reflect real world usage patterns. Most software engineering textbooks refer to this process as workload modeling, or application usage modeling. To ensure that the Performance Lab measures real world patterns, the Performance Lab uses real Web pages that represent real world patterns and exercise different browser subsystems.

In order to determine which sites to use for testing, we regularly crawl millions of sites and compile a list of site attributes and coding patterns. We use 68 different data points to determine commonalities across sites – things like the depth and width of the resulting DOM, CSS layout patterns, common frameworks used, international features, and more. From the results we chose sites that best represent the common patterns and diversity of the broader Web.

Engineering metrics

Performance is a multi-dimensional problem. The only way to get an accurate view of performance is to understand the scenario you’re testing, and how the hardware and OS interact with the browser. Here’s a closer look at five important performance metrics in the context of loading a major sports site for the first time.

Figure: Chart comparing display time, elapsed time, CPU time, resource utilization, and power consumption.

Display Time – Display Time measures the time from when the user performs an action until the user sees the result of that action on the screen.

Elapsed Time – Most sites continue to perform background work after content has been displayed to the screen. Examples might include downloading the next email in a web mail application or sending analytics back to a provider. From the user’s perspective, the site might appear finished; however, significant work is often occurring which can impact overall responsiveness.

CPU Time – Modern web browsers are almost exclusively limited by the speed of the CPU. Offloading work to the GPU and making the CPU more efficient makes a large impact on performance.

Resource Utilization – Building a fast browser means ensuring resources across the entire PC work well together, including network utilization, memory usage patterns, GPU processing, graphics, memory, and hundreds of other dimensions. Since users run several applications at the same time on their PCs, it’s important for browsers to responsibly share these resources with other applications.

Power Consumption – Increasing power efficiency leads to longer battery life in mobile scenarios, lower electricity costs for the device, and a smaller environmental impact.

Concentrating only on a single metric creates an overly simplistic view of performance. By focusing on a single metric, humans naturally tend to optimize for that metric, often at the expense of other equally important metrics. The only way to combat that tendency is to measure all aspects of performance, and then make the tradeoffs consciously, rather than implicitly.

In total, the Performance Lab measures over 850 different metrics. Each one provides part of the picture of browser performance. To give a feel for what we measure, here’s a (non-exhaustive) list of key metrics: private working set, total working set, HTTP request count, TCP bytes received, number of binaries loaded, number of context switches, DWM video memory usage, percent GPU utilization, number of paints, CPU time in JavaScript garbage collection, CPU time in JavaScript parsing, average DWM update interval, peak total working set, number of heap allocations, size of heap allocations, number of outstanding heap allocations, size of outstanding heap allocations, CPU time in layout subsystem, CPU time in formatting subsystem, CPU time in rendering subsystem, CPU time in HTML parser subsystem, idle CPU time, number of threads.

Windows event tracing infrastructure

Metrics are gathered using Event Tracing for Windows (ETW) and VMMap. ETW is the Windows-wide event logging system that is used by many Windows components and third-party applications, including the Windows Event Log. ETW logging APIs are extremely low level and low overhead, which is critical for performance testing.

Figure: The WPT trace view shows six graphs stacked vertically: CPU Usage by Process, Generic Events, WinINet End-to-End Downloads, IE CPU Breakdown, WinInet Transfer Setups, and IE Repaint.

The trace viewer included in WPT, xperfview.exe, is a powerful visualizer that allows correlation and overlaying kernel, CPU, GPU, I/O, networking, and other events. WPT also supports stack walking. Stack walking takes a snapshot of the program’s callstack at regular intervals and saves the stack as part of the trace. By correlating ETW events with stacks, WPT will display not only what work was being done, but the callstack associated with that work and the amount of time spent doing that work, with 10 microsecond resolution. Stack walking can be enabled on any process, even one that does not use ETW events. The drawback to stack walking is that it requires debugging symbols to decode the stacks, and is susceptible to aliasing.

Testing a scenario

The final piece of the puzzle is the actual test process. Testing can be broken into 3 phases: setup, testing, and errors and cleanup. Here’s a flowchart of the entire process to follow along.

Figure: Flowchart of the test process, starting with "User requests run" and ending with "Run is marked finished".

Setup

The process starts when a user requests a run through the lab website or automation framework. The run is placed into a priority queue with other pending runs. When a test client becomes available, it checks the queue and starts the highest priority job that it can. First, the test client installs the Test OS specified. The IE Performance Lab supports testing on Vista, Windows 7, and Windows 8. The test client installs a fresh copy of the Test OS for every run so the machine always starts in a known good state.

Once the Test OS is installed, the client configures WPT, VMMap, and the test harness. The run also specifies a number of IE settings such as the homepage, use of Suggested Sites, InPrivate browsing, and others. Any third-party software is also installed and configured at this point.

The final step before testing is ensuring that the test client is idle to minimize test interference. Windows defines a concept of idle tasks. Idle tasks are a way for Windows and other developers to schedule non-critical work to happen at a later time when the user is not competing for resources. OS idle tasks include prefetching or SuperFetching, disk defragmentation, updating search indexes, and others, depending on OS version and configured services. To ensure that no idle work is done during the tests, the idle task queue is flushed.

Additionally, Windows Defender is paused and the log location for the test harness is marked as excluded from the Windows Indexing Service to prevent log and trace files from causing the indexer to start during a test run. Testing is done in multiple passes to minimize the number of providers needed, since additional providers increase the observer effect. The first pass is always a warm-up pass. Warm-up ensures that the browser binaries are “warm” and that the maximum amount of cachable page content is available in the WinINET cache. Subsequent passes each focus on a specific type of instrumentation, like stack walking, memory tracing, and I/O and registry tracing.

Errors and cleanup

If at any time during the test the browser crashes, the test pass is considered failed and the run moves on to the next test pass. If at any time during the tests Windows crashes, the computer reboots and the OS is reinstalled, since its state cannot be guaranteed. If the number of retries exceeds the threshold, the whole run is considered failed and the machine moves on to another run to prevent endlessly trying to test an unstable build.

When all the test cases are complete, the test client uploads the logs and traces for analysis. The test client then returns to an idle state and begins polling for a new run.

Results investigation

Each metric is tracked change-over-change. We run each test case a minimum of ten times, and duplicate runs on at least two different machines to create the sample population. Using statistical tools, uncharacteristic results can be automatically flagged for investigation. An increase in variance is also considered a regression: users interact with IE under a wide range of circumstances and on a wide range of hardware, and one of our goals is to ensure a smooth and predictable experience every time.

In addition to automated analysis, a triage team investigates the daily results to watch for trends and other interesting behaviors. Manual investigation cannot be eliminated because many statistical tools assume both a normal distribution and that all samples are independent.

Neither assumption may be strictly true for our measurements. Some activities in IE are driven by a timer from the OS, meaning results are also dependent on when (along the timer’s cycle) the page load begins. A page load that starts right before or after a timer interrupt may do more or less work because IE must service the interrupt at different points in the page-load process. This interruption can have a rippling effect that leads to a bimodal distribution. Also, because we use repeated trials (and we don’t wipe the machine between iterations), the next trial is influenced by previous trials. Here’s a sample Elapsed Time graph for Bing Maps for change-over-change comparison.

Figure: A bar chart with a red line superimposed; hovering over one point in the chart shows a tooltip listing max, median, min, and other info.

The red series shows the median value of each test run, and grey bars show the range. Hovering over a test run will show the iterations for the metric (in blue) as well as a tooltip that provides the exact values for minimum, median, max values, as well as the absolute and relative difference with the previous test run. The tooltip shown in this image also provides additional context like the build being tested, and a quick link to our source control system to view the changes in the build.

The combination of automated analysis and manual investigation provides the IE team with reliable and actionable data for performance tuning.

Testing third-party software

Many third-party applications depend on Trident, the network stack, and other IE components. Extensions like BHOs and toolbars load within the IE context. Other applications, like security software, can inject themselves between IE components. These applications become part of the IE stack, and can lead to poor performance. The Performance Lab is capable of measuring the impact of third-party software on browsing real world content in a controlled environment. These studies are important to IE and the ecosystem because users generally cannot quantify the impact of popular software on their browsing experience.

When testing third-party software impact, we compare a run with the third-party software installed against a clean run with only IE installed to determine the impact of the software. In particular, we are interested in measuring two metrics: startup time and navigation time. Startup time measures the time it takes to launch the browser and navigate to a URL, whereas navigation time measures the time it takes to navigate to a URL when the browser has already been launched. Startup time also includes the time that third-party applications take to load their IE extensions.

Using cached content allows repeatability in our measurements. Further, by measuring a cached site, we can definitively know that a performance regression is caused by the third-party software and not by differences in the site. Whenever measuring the impact of third-party software, we also validate our findings by testing startup and navigation on a direct connection to the Internet, to verify that the testing environment is not responsible for any deltas.

Many third-party applications offload work during a page navigation to cloud services. While parallelization of work and use of cloud services are excellent techniques to improve performance, some applications wait synchronously for the results from the network, blocking the navigation in the process. There are many real world scenarios, like strict firewalls, WAN connections, and offline scenarios, where such patterns can lead to poor performance for users. Third-party software should never process synchronously in response to an IE or user action, and should batch UI and DOM updates to minimize disruption.

Building a fast browser for users

Real world browser performance matters. Measuring performance at scale is a significant investment and a full-time job, but the results are well worth the effort. The data gathered by the Internet Explorer Performance Lab is instrumental in our understanding of browser performance and of the underlying PC hardware, and in developing a fast, fluid, and responsive web experience for users.

—Matt Kotsenas, Jatinder Mann, and Jason Weber for the Internet Explorer Performance Team

IEBlog: The Year in Review: W3C Web Performance Working Group

In this IEBlog article, I look back at a year in the W3C Web Performance Working Group:

17 Aug 2011 9:36 AM

Fast HTML5 Web applications benefit consumers who browse the Web and developers building innovative new experiences. Measuring performance characteristics of Web applications and writing efficient applications are two important aspects of making Web sites fast. Browser manufacturers can rapidly address developers’ needs through interoperable APIs when collaboratively partnering through the W3C.

One year ago today, the W3C announced the formation of a Web Performance Working Group chartered with two goals: making it easier to measure and understand the performance characteristics of Web applications and defining interoperable methods to write more CPU- and power-efficient applications.

Together with Google, Mozilla, Facebook, and other industry and community leaders who participate in the W3C Web Performance Working Group, we designed the Navigation Timing, Resource Timing, User Timing and Performance Timeline specifications to help developers accurately measure Web application performance. The first three specifications, Navigation Timing, Resource Timing, and User Timing, define interfaces for Web applications to access timing information related to the navigation of the document, resources on the page, and developer scripts, respectively. The Performance Timeline specification defines a unifying interface to retrieve this timing data.

Resource Timing, User Timing, and Performance Timeline specifications are all in the Last Call phase of specification. Last Call is a signal that the working group believes the spec is functionally complete and is ready for broad review from both other working groups and the public at large. This Last Call period extends until September 15, 2011. The Navigation Timing specification is already in the Candidate Recommendation phase and has two interoperable implementations, starting with Internet Explorer 9 and Chrome 6. Together these APIs help Web developers create faster and more efficient applications by providing insights into the performance characteristics of their applications that just weren’t possible before.

Over the last four months, the Web Performance Working Group defined interoperable methods to write more CPU- and power-efficient applications by producing the Page Visibility, Timing control for script-based animations, and Efficient Script Yielding specifications. The Page Visibility specification is in the Last Call phase until September 8th and has two implementations starting with the second IE10 Platform Preview and Chrome 13. The requestAnimationFrame API, from the Timing control for script-based animations specification, has three implementations starting with the second IE10 Platform Preview, Firefox 4 and Chrome 10. This specification is very close to entering Last Call. For more information on these two APIs, see the blog posts on using PC Hardware more efficiently with these APIs (link and link). IE10 is the first browser to implement the emerging setImmediate API from the Efficient Script Yielding specification.

It’s encouraging to see how much progress we’ve collectively made in just one year. These APIs are a great example of how quickly new ideas can become interoperable standards that developers can depend on in modern HTML5-enabled browsers. Thanks to everyone in the W3C Web Performance Working Group for helping design these APIs and to other browser vendors for starting to implement these APIs with an eye towards interoperability.

—Jatinder Mann, Program Manager, IE Performance

IEBlog: Using PC Hardware more efficiently in HTML5, Part 2

This IEBlog article is the second part of a two part series I wrote on using PC hardware more efficiently with new Web Performance APIs:

8 Jul 2011 12:56 PM

Web developers need APIs to efficiently take advantage of modern PC hardware through HTML5, improving the performance of Web applications, the power efficiency of the Web platform, and the resulting customer experience. The second IE10 Platform Preview supports emerging APIs from the W3C Web Performance Working Group which enable developers to make the most of the underlying hardware and use battery power more efficiently. This post details how to use Page Visibility, one of the emerging APIs, for better performance and power efficiency.

Page Visibility: adjust work depending on if the user is looking

Knowing whether a page is visible makes it possible for a developer to make better decisions about what the page does, especially around power usage and background tasks. Take a look at the Page Visibility Test Drive to see how a Web application can be aware of whether the page is visible or not to the user.

 
The Page Visibility API is now available through vendor-prefixed implementations in IE10 and Chrome 13.

The developer can adjust or scale back what work the page does based on visibility. For example, if a Web based email client is visible, it may check the server for new mail every few seconds. When hidden it might scale checking email to every few minutes. Other examples include a puzzle application that can be paused when the user no longer has the game visible or showing ads only if the page is visible to the user.

The Page Visibility specification enables developers to determine the current visibility of a document and be notified of visibility changes. It consists of two properties and an event:

  • document.hidden: A boolean that describes whether the page is visible or not.
  • document.visibilityState: An attribute that returns the detailed page visibility state, e.g., PAGE_VISIBLE, PAGE_PREVIEW, etc.
  • visibilitychange: An event that gets fired any time the visibility state of the page changes.

IE10 has prefixed these attributes and event with the ‘ms’ vendor prefix.

With this interface, Web applications may choose to alter behavior based on whether they are visible to the user or not. For example, the following JavaScript shows a theoretical Web-based email client checking for new emails every second without knowledge of the Page Visibility API:


<!DOCTYPE html>
<html>
<head>
<title>Typical setInterval Pattern</title>
<script>
var timer = 0;
var PERIOD = 1000; // check for mail every second

function onLoad() 
{
   timer = setInterval(checkEmail, PERIOD);
}

function checkEmail() 
{
   debugMessage("Checking email at " + new Date().toTimeString());
}

function debugMessage(s) 
{
   var p = document.createElement("p");
   p.appendChild(document.createTextNode(s));
   document.body.appendChild(p);
}
</script>
</head>
<body onload="onLoad()">
</body>
</html>

Using Page Visibility, the same page can throttle back how often it checks email when the page is not visible:


<!DOCTYPE html>
<html>
<head>
<title>Visibility API Example</title>
<script>
var timer = 0;
var PERIOD_VISIBLE = 1000; // 1 second
var PERIOD_NOT_VISIBLE = 10000; // 10 seconds
var vendorHidden, vendorVisibilitychange;

function detectHiddenFeature() 
{
   // draft standard implementation
   if (typeof document.hidden != "undefined") 
   {
      vendorHidden = "hidden";
      vendorVisibilitychange = "visibilitychange";
      return true;
   }
   // IE10 prefixed implementation
   if (typeof document.msHidden != "undefined") 
   {
      vendorHidden = "msHidden";
      vendorVisibilitychange = "msvisibilitychange";
      return true;
   }
   // Chrome 13 prefixed implementation
   if (typeof document.webkitHidden != "undefined") 
   {
      vendorHidden = "webkitHidden";
      vendorVisibilitychange = "webkitvisibilitychange";
      return true;
   }
   // feature is not supported
   return false;
}

function onLoad() 
{
   // if the document.hidden feature is supported, vary interval based on visibility.
   // otherwise, just use setInterval with a fixed time.
   if (detectHiddenFeature()) 
   {
      timer = setInterval(checkEmail, document[vendorHidden] ? PERIOD_NOT_VISIBLE : PERIOD_VISIBLE);
      document.addEventListener(vendorVisibilitychange, visibilityChanged);
   }
   else 
   {
      timer = setInterval(checkEmail, PERIOD_VISIBLE);
   }
}

function checkEmail() 
{
   debugMessage("Checking email at " + new Date().toTimeString());
}

function visibilityChanged() 
{
   clearInterval(timer);
   timer = setInterval(checkEmail, document[vendorHidden] ? PERIOD_NOT_VISIBLE : PERIOD_VISIBLE);
   debugMessage("Going " + (document[vendorHidden] ? "not " : "") + "visible at " + new Date().toTimeString());
}

function debugMessage(s) 
{
   var p = document.createElement("p");
   p.appendChild(document.createTextNode(s));
   document.body.appendChild(p);
}
</script>
</head>
<body onload="onLoad()">
</body>
</html>

With the Page Visibility API, Web developers can create more power conscious Web applications. To learn about other emerging APIs from the W3C Web Performance Working Group supported in the second IE10 Platform Preview, read my post on the requestAnimationFrame API (link).

—Jatinder Mann, Internet Explorer Program Manager

IEBlog: Using PC Hardware more efficiently in HTML5, Part 1

In this two part IEBlog article, I discuss how to use PC hardware more efficiently with the new Web Performance APIs.

5 Jul 2011 4:43 PM

Browsers need amazing performance to deliver on the promise of HTML5 applications. Web developers need APIs to efficiently take advantage of modern PC hardware through HTML5 and build high performance Web applications with efficient power use for great, all-around customer experiences. The second IE10 Platform Preview supports three emerging APIs from the W3C Web Performance Working Group which enable developers to make the most of the underlying hardware and use battery power more efficiently: requestAnimationFrame, Page Visibility and setImmediate.

Together with Google, Mozilla, and others on the W3C Web Performance Working Group and the community, we designed these new APIs over the last three months. These three APIs are a great example of how quickly new ideas can become interoperable standards developers can depend on in modern HTML5-enabled browsers.

This post details how to use the requestAnimationFrame API for better performance and power efficiency.

requestAnimationFrame API: like setTimeout, but with less wasted effort

Web developers can now schedule animations to reduce power consumption and choppiness. For example, animations today generally occur even when a Web site is in a background tab, minimized, or otherwise not visible, wasting precious battery life. Animations today are not generally aligned with the display’s refresh rate, causing choppiness in the animation. Until the requestAnimationFrame API, the Web platform did not provide an efficient means for Web developers to schedule graphics timers for animations.

Let’s take a look at an example. Most animations use a JavaScript timer resolution of less than 16.7ms to draw animations, even though most monitors can only display at 16.7ms periods (at 60Hz frequency). The first graph below represents the 16.7ms display monitor frequency. The second graph represents a typical setTimeout or setInterval of 10ms. In this case, the consumer will never see every third draw because another draw will occur before the display refreshes. This overdrawing results in choppy animations as every third frame is being lost. Reducing the timer resolution can also negatively impact battery life by up to 25%.

Figure: At top, each arrow represents a monitor refresh at 16.7ms periods. Below is a 10ms page redraw cycle. Every third redraw is wasted because the monitor will never show it. The result is choppy animations and wasted battery.

The requestAnimationFrame API is similar to the setTimeout and setInterval APIs developers use today. The key difference is that it notifies the application when the browser needs to update the screen, and only when the browser needs to update the screen. It keeps Web applications perfectly aligned with the browser’s painting, and uses only the necessary amount of resources.

If you take a look at this requestAnimationFrame example, you will notice that even though the animations look identical, the clock drawn with requestAnimationFrame is always more efficient in power consumption, background interference and CPU efficiency than the setTimeout clock.


Figure: The requestAnimationFrame animation (right) is more efficient in power consumption, background interference and CPU efficiency than the setTimeout animation (left).

It is a simple change to upgrade your current animations to use this new API. If you are using setTimeout(draw, PERIOD); in your code, you can replace it with requestAnimationFrame. The requestAnimationFrame API is the furthest along of these three APIs, with vendor-prefixed interoperable implementations available starting with Firefox 4, Chrome 13, and IE10.

Remember, requestAnimationFrame only schedules a single callback, like setTimeout, and if subsequent animation frames are needed, then requestAnimationFrame will need to be called again from within the callback. In this Platform Preview, IE implements the API with a vendor prefix like other browsers. Here is an example of how to write cross-browser code that is future proof:


// Future-proof: when feature is fully standardized
if (window.requestAnimationFrame) window.requestAnimationFrame(draw);
// IE implementation
else if (window.msRequestAnimationFrame) window.msRequestAnimationFrame(draw);
// Firefox implementation
else if (window.mozRequestAnimationFrame) window.mozRequestAnimationFrame(draw);
// Chrome implementation
else if (window.webkitRequestAnimationFrame) window.webkitRequestAnimationFrame(draw);
// Other browsers that do not yet support feature
else setTimeout(draw, PERIOD);
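For a continuous animation, the callback itself requests the next frame; here is a minimal sketch built on the same fallbacks (the requestFrame helper name is just for illustration):

// Each callback draws one frame and then schedules the next one,
// so drawing stays aligned with the browser's painting.
function requestFrame(callback) {
   if (window.requestAnimationFrame) window.requestAnimationFrame(callback);
   else if (window.msRequestAnimationFrame) window.msRequestAnimationFrame(callback);
   else if (window.mozRequestAnimationFrame) window.mozRequestAnimationFrame(callback);
   else if (window.webkitRequestAnimationFrame) window.webkitRequestAnimationFrame(callback);
   else setTimeout(callback, 1000 / 60);   // ~16.7ms fallback
}

function draw() {
   // ... update and paint the current frame here ...
   requestFrame(draw);                     // schedule the next frame only when one is needed
}

requestFrame(draw);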

Thank you to everyone in the W3C Web Performance Working Group for helping design these APIs, and thanks to other browser vendors for starting to implement these APIs with an eye towards interoperability. With these APIs, Web developers can create a faster and more power conscious Web.

—Jatinder Mann, Internet Explorer Program Manager