Quantum Flow Engineering Newsletter #10

Let’s start this week’s updates by looking at the ongoing efforts to improve the usefulness of the background hang reports (BHR) data.  With Ben Miroglio’s help, we confirmed that sending native stack traces for BHR hangs isn’t blowing up telemetry ping sizes yet, and as a result we can now capture deeper call stacks, which will make the resulting data easier to analyze.  Doug Thayer has also been hard at work creating a new BHR dashboard based on the perf-html UI.  You can see a sneak peek here, but do note that this is a work in progress!  The raw BHR data is still available for your inspection.
Kannan Vijayan has been working on adding some low-level instrumentation to SpiderMonkey, using the rdtsc instruction on Windows, in order to get detailed information on the relative runtime costs of the JS engine’s builtin intrinsic operations across various workloads.  He now has a working setup that allows him to take a real-world JS workload and get detailed data on which builtin intrinsics were the most costly in that workload.  This is extremely valuable because it lets us focus our optimization efforts on the builtins where the biggest gains are to be had.  He already has some initial results from running this tool on the Speedometer benchmark and on a general browsing workload, and some optimization work has already started to happen.
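The post doesn’t show the instrumentation itself, but the basic rdtsc technique can be sketched as follows.  This is a hypothetical, simplified illustration: the `IntrinsicTimer` helper, the `gArrayJoinCycles` counter, and the `ArrayJoinBuiltin` function are made up for this example and are not SpiderMonkey’s actual code.

```cpp
#include <cstdint>
#if defined(_MSC_VER)
#include <intrin.h>     // __rdtsc on MSVC
#else
#include <x86intrin.h>  // __rdtsc on GCC/Clang
#endif

// Accumulated cycle count for one hypothetical builtin.
static uint64_t gArrayJoinCycles = 0;

// RAII helper: read the time-stamp counter on entry and add the
// elapsed cycles to a per-builtin total on exit.
struct IntrinsicTimer {
  uint64_t* total;
  uint64_t start;
  explicit IntrinsicTimer(uint64_t* aTotal)
      : total(aTotal), start(__rdtsc()) {}
  ~IntrinsicTimer() { *total += __rdtsc() - start; }
};

// A stand-in for one of the engine's builtin intrinsics.
void ArrayJoinBuiltin() {
  IntrinsicTimer timer(&gArrayJoinCycles);
  // ... the builtin's actual work would run here ...
}
```

Comparing the accumulated totals across builtins after running a workload shows where the cycles are going.  Note that raw rdtsc counts are affected by frequency scaling and core migration, so the results are best treated as relative, not absolute, costs.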
Dominik Strohmeier has been helping with running startup measurements on the reference Acer machine, using an HDMI video capture card, to track the progress of the ongoing startup improvements.  For these measurements, we are tracking two numbers: the first paint time (the time at which we paint the first frame of the browser window) and the hero element time (the time at which we paint the “hero element”, which in this case is the search box in about:home).  The baseline build here is the Nightly of Apr 1st, a date before active work on startup optimizations started.  At that time, our median first paint time was 1232.84ms (with a standard deviation of 16.58ms) and our median hero element time was 1849.26ms (with a standard deviation of 28.58ms).  On the Nightly of May 18, our median first paint time is 849.66ms (with a standard deviation of 11.78ms) and our median hero element time is 1616.02ms (with a standard deviation of 24.59ms).
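The post reports these results as medians with standard deviations.  The actual measurement tooling isn’t shown, but as a minimal sketch, assuming the per-run times have been collected into a vector of milliseconds, the two statistics could be computed like this:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Median of a sample (averaging the two middle values for even sizes).
double Median(std::vector<double> samples) {
  std::sort(samples.begin(), samples.end());
  size_t n = samples.size();
  return n % 2 ? samples[n / 2]
               : (samples[n / 2 - 1] + samples[n / 2]) / 2.0;
}

// Sample standard deviation (dividing by n - 1, Bessel's correction).
double StdDev(const std::vector<double>& samples) {
  double mean = 0;
  for (double s : samples) mean += s;
  mean /= samples.size();
  double var = 0;
  for (double s : samples) var += (s - mean) * (s - mean);
  return std::sqrt(var / (samples.size() - 1));
}
```

The median is used rather than the mean because a few outlier runs (e.g. a cold disk cache) would otherwise skew the startup number.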
Next week we’re going to have a small work week with some people from the DOM, JS, Layout, Graphics and Perf teams here in Toronto.  I expect to be fully busy at the work week, so you should expect the next issue of this newsletter in two weeks!  With that, it is time to acknowledge the hard work of those who helped make Firefox faster this past week.  I hope I’m not leaving anyone out by accident!
8 comments on “Quantum Flow Engineering Newsletter #10”
  1. Can you publish the “reference Acer” spec somewhere?

  2. manoj says:

    Hi Ehsan,

    This is a long list of changes. The team is on fire!

    There are a number of metrics in these posts that are material to measuring progress: startup time, time to first paint, tab open and close, etc. Two questions:

    1. Have you settled on a firm list that you want to track week-to-week? This might be something you covered in an earlier post, so pardon my ignorance.

    2. If yes, is there a way to see a visual depiction of how this week’s build (with all the changes you cover) compares to last week’s build for the same metrics? Taking this to a logical next step, can folks see how the release version of Firefox compares to the latest nightly builds with said changes?

    (2) will make this real for the non-developers reading your posts, and provide the impetus to folks to get on the Nightly channel.

    Thanks for your hard work.

    • ehsan says:

      Unfortunately there isn’t such a list. A lot of these efforts are going on together at the same time, and sometimes one piece finishes in one week and another piece the next, which causes me to talk about them separately, but in a lot of cases the actual ongoing work is much more parallelized than I may make it sound. 🙂

      • Hi Ehsan — When you get a moment to step back from the fervent coding and triaging, please consider identifying a short list (5 max) of metrics that matter. These could all be “user impact” related, and product management (Asa used to write about metrics in the past) might be able to assist.

        From reading Bugzilla, you clearly use some criteria to prioritize issues. Maybe translating those criteria into metrics for which a combination of Talos and Telemetry can provide you with data is the answer.

        When I read Safari, Firefox or Chrome release notes, the missing part is “what do I really get as a user now that all these bugs are fixed”. A graph that compares FF-new to FF-pre says more than a thousand words would. Bonus: the HN crowd would appreciate this as well.


        • ehsan says:

          Perhaps https://health.graphics/quantum/ is close to what you had in mind?

          In general I would love to do something like what you’re suggesting, but the real difficulty is limiting any list to a small number. 🙂 Even this dashboard is pretty long, so I’m still not sure it quite captures what you’re looking for? But I hope it’s somewhat close…

  3. Gerd Neumann says:

    I noticed that closing tabs is really slow in Firefox 54 (compared to closing tabs in Chrome, or say Visual Studio Code, which uses Chromium as its basis), especially “Close Tabs to the Right” or “Close Other Tabs” (in the context menu when right-clicking a tab) when there are more than, say, 10 tabs open. Slow as in taking 5 seconds (fast desktop PC) to 20 seconds (old Thinkpad laptop).

    Is this also tracked somewhere by the Quantum project?

    • ehsan says:

      Yes! And in fact I think (hope!) we have already fixed it for the most part. Firefox 55 should be a lot faster at closing tabs; tabs should now close almost instantaneously.